Dec 13 14:33:36.100985 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:33:36.101023 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:36.101040 kernel: BIOS-provided physical RAM map:
Dec 13 14:33:36.101052 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:33:36.101062 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:33:36.101073 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:33:36.101090 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:33:36.101101 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:33:36.101112 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:33:36.101123 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:33:36.101135 kernel: NX (Execute Disable) protection: active
Dec 13 14:33:36.101147 kernel: SMBIOS 2.7 present.
Dec 13 14:33:36.101158 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:33:36.101171 kernel: Hypervisor detected: KVM
Dec 13 14:33:36.101189 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:33:36.101202 kernel: kvm-clock: cpu 0, msr 6c19a001, primary cpu clock
Dec 13 14:33:36.101215 kernel: kvm-clock: using sched offset of 7508817820 cycles
Dec 13 14:33:36.101228 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:33:36.101242 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 14:33:36.101255 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:33:36.101271 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:33:36.101284 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:33:36.101315 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:33:36.101327 kernel: Using GB pages for direct mapping
Dec 13 14:33:36.101339 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:33:36.101352 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:33:36.101365 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:33:36.101378 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:33:36.101392 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:33:36.101407 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:33:36.101420 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:33:36.101433 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:33:36.101446 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:33:36.101459 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:33:36.101471 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:33:36.101484 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:33:36.101497 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:33:36.101513 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:33:36.101526 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:33:36.101539 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:33:36.101558 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:33:36.101572 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:33:36.101586 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:33:36.101600 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:33:36.101616 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:33:36.101630 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:33:36.101643 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:33:36.101656 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:33:36.101767 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:33:36.101783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:33:36.101797 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:33:36.101811 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:33:36.101828 kernel: Zone ranges:
Dec 13 14:33:36.101842 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:33:36.101856 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:33:36.101870 kernel: Normal empty
Dec 13 14:33:36.101884 kernel: Movable zone start for each node
Dec 13 14:33:36.101897 kernel: Early memory node ranges
Dec 13 14:33:36.101911 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:33:36.101925 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:33:36.101939 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:33:36.101956 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:33:36.101970 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:33:36.101985 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:33:36.101999 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:33:36.102011 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:33:36.102024 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:33:36.102037 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:33:36.102050 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:33:36.102064 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:33:36.102081 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:33:36.102095 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:33:36.102109 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:33:36.102123 kernel: TSC deadline timer available
Dec 13 14:33:36.102137 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:33:36.102151 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:33:36.102165 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:33:36.102179 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:33:36.102194 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:33:36.102211 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:33:36.102224 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:33:36.102238 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:33:36.102251 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:33:36.102265 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:33:36.102279 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:33:36.102330 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:33:36.102344 kernel: Policy zone: DMA32
Dec 13 14:33:36.102361 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:36.102380 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:33:36.102393 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:33:36.102407 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:33:36.102422 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:33:36.102436 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:33:36.102451 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:33:36.102465 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:33:36.102478 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:33:36.102495 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:33:36.102509 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:33:36.102525 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:33:36.102539 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:33:36.102554 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:33:36.102568 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:33:36.102582 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:33:36.102596 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:33:36.102610 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:33:36.102626 kernel: random: crng init done
Dec 13 14:33:36.102640 kernel: Console: colour VGA+ 80x25
Dec 13 14:33:36.102654 kernel: printk: console [ttyS0] enabled
Dec 13 14:33:36.102668 kernel: ACPI: Core revision 20210730
Dec 13 14:33:36.102682 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:33:36.102696 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:33:36.102710 kernel: x2apic enabled
Dec 13 14:33:36.102724 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:33:36.102739 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 14:33:36.102756 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 14:33:36.102769 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:33:36.102784 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:33:36.102798 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:33:36.102823 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:33:36.102840 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:33:36.102854 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:33:36.102869 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:33:36.102884 kernel: RETBleed: Vulnerable
Dec 13 14:33:36.102897 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:33:36.102911 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:33:36.102926 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:33:36.102941 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:33:36.102955 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:33:36.102973 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:33:36.102989 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:33:36.103003 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:33:36.103018 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:33:36.103032 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:33:36.103050 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:33:36.103065 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:33:36.103080 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:33:36.103095 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:33:36.103109 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:33:36.103124 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:33:36.103138 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:33:36.103153 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:33:36.103169 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:33:36.103184 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:33:36.103199 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:33:36.103214 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:33:36.103232 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:33:36.103246 kernel: LSM: Security Framework initializing
Dec 13 14:33:36.103261 kernel: SELinux: Initializing.
Dec 13 14:33:36.103276 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:33:36.103291 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:33:36.103321 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:33:36.103333 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:33:36.103347 kernel: signal: max sigframe size: 3632
Dec 13 14:33:36.103360 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:33:36.103374 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:33:36.103392 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:33:36.103406 kernel: x86: Booting SMP configuration:
Dec 13 14:33:36.103428 kernel: .... node #0, CPUs: #1
Dec 13 14:33:36.103442 kernel: kvm-clock: cpu 1, msr 6c19a041, secondary cpu clock
Dec 13 14:33:36.103456 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:33:36.103471 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:33:36.103578 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:33:36.103595 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:33:36.103608 kernel: smpboot: Max logical packages: 1
Dec 13 14:33:36.103625 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 14:33:36.103640 kernel: devtmpfs: initialized
Dec 13 14:33:36.103655 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:33:36.103670 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:33:36.103685 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:33:36.103700 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:33:36.103714 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:33:36.103729 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:33:36.103744 kernel: audit: type=2000 audit(1734100415.613:1): state=initialized audit_enabled=0 res=1
Dec 13 14:33:36.103760 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:33:36.103775 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:33:36.103790 kernel: cpuidle: using governor menu
Dec 13 14:33:36.103804 kernel: ACPI: bus type PCI registered
Dec 13 14:33:36.103819 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:33:36.103833 kernel: dca service started, version 1.12.1
Dec 13 14:33:36.103848 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:33:36.103863 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:33:36.103878 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:33:36.103895 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:33:36.103909 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:33:36.103924 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:33:36.103938 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:33:36.104012 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:33:36.104027 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:33:36.104041 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:33:36.104056 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:33:36.104200 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:33:36.104222 kernel: ACPI: Interpreter enabled
Dec 13 14:33:36.104237 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:33:36.104252 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:33:36.104267 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:33:36.104281 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:33:36.104314 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:33:36.104526 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:33:36.104654 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:33:36.104675 kernel: acpiphp: Slot [3] registered
Dec 13 14:33:36.104689 kernel: acpiphp: Slot [4] registered
Dec 13 14:33:36.104703 kernel: acpiphp: Slot [5] registered
Dec 13 14:33:36.104717 kernel: acpiphp: Slot [6] registered
Dec 13 14:33:36.104731 kernel: acpiphp: Slot [7] registered
Dec 13 14:33:36.104745 kernel: acpiphp: Slot [8] registered
Dec 13 14:33:36.104759 kernel: acpiphp: Slot [9] registered
Dec 13 14:33:36.104772 kernel: acpiphp: Slot [10] registered
Dec 13 14:33:36.104785 kernel: acpiphp: Slot [11] registered
Dec 13 14:33:36.104802 kernel: acpiphp: Slot [12] registered
Dec 13 14:33:36.104814 kernel: acpiphp: Slot [13] registered
Dec 13 14:33:36.104827 kernel: acpiphp: Slot [14] registered
Dec 13 14:33:36.104840 kernel: acpiphp: Slot [15] registered
Dec 13 14:33:36.104854 kernel: acpiphp: Slot [16] registered
Dec 13 14:33:36.104868 kernel: acpiphp: Slot [17] registered
Dec 13 14:33:36.104880 kernel: acpiphp: Slot [18] registered
Dec 13 14:33:36.104893 kernel: acpiphp: Slot [19] registered
Dec 13 14:33:36.104908 kernel: acpiphp: Slot [20] registered
Dec 13 14:33:36.104926 kernel: acpiphp: Slot [21] registered
Dec 13 14:33:36.104941 kernel: acpiphp: Slot [22] registered
Dec 13 14:33:36.104953 kernel: acpiphp: Slot [23] registered
Dec 13 14:33:36.104967 kernel: acpiphp: Slot [24] registered
Dec 13 14:33:36.104984 kernel: acpiphp: Slot [25] registered
Dec 13 14:33:36.104998 kernel: acpiphp: Slot [26] registered
Dec 13 14:33:36.105011 kernel: acpiphp: Slot [27] registered
Dec 13 14:33:36.105023 kernel: acpiphp: Slot [28] registered
Dec 13 14:33:36.105035 kernel: acpiphp: Slot [29] registered
Dec 13 14:33:36.105046 kernel: acpiphp: Slot [30] registered
Dec 13 14:33:36.105062 kernel: acpiphp: Slot [31] registered
Dec 13 14:33:36.105075 kernel: PCI host bridge to bus 0000:00
Dec 13 14:33:36.105207 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:33:36.105330 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:33:36.105446 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:33:36.105558 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:33:36.105725 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:33:36.105880 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:33:36.106020 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:33:36.106158 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:33:36.106288 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:33:36.106439 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:33:36.106650 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:33:36.106780 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:33:36.106911 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:33:36.107039 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:33:36.107165 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:33:36.107311 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:33:36.107463 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:33:36.107592 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:33:36.107718 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:33:36.107849 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:33:36.107983 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:33:36.108312 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:33:36.108573 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:33:36.108710 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:33:36.108731 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:33:36.108751 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:33:36.108767 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:33:36.108783 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:33:36.108797 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:33:36.108812 kernel: iommu: Default domain type: Translated
Dec 13 14:33:36.108827 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:33:36.108957 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:33:36.109088 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:33:36.109218 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:33:36.109240 kernel: vgaarb: loaded
Dec 13 14:33:36.109256 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:33:36.109271 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:33:36.109286 kernel: PTP clock support registered
Dec 13 14:33:36.109321 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:33:36.109333 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:33:36.109345 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:33:36.109357 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:33:36.109374 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:33:36.109388 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:33:36.109402 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:33:36.109414 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:33:36.109429 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:33:36.109442 kernel: pnp: PnP ACPI init
Dec 13 14:33:36.109455 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:33:36.109469 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:33:36.109482 kernel: NET: Registered PF_INET protocol family
Dec 13 14:33:36.109497 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:33:36.109509 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:33:36.109522 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:33:36.109533 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:33:36.109545 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:33:36.109558 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:33:36.109571 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:33:36.109584 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:33:36.109598 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:33:36.109613 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:33:36.109818 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:33:36.109929 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:33:36.110034 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:33:36.110139 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:33:36.110261 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:33:36.110423 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:33:36.110445 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:33:36.110459 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:33:36.110472 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 14:33:36.110485 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:33:36.110499 kernel: Initialise system trusted keyrings
Dec 13 14:33:36.110512 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:33:36.110525 kernel: Key type asymmetric registered
Dec 13 14:33:36.110539 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:33:36.110550 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:33:36.110566 kernel: io scheduler mq-deadline registered
Dec 13 14:33:36.110578 kernel: io scheduler kyber registered
Dec 13 14:33:36.110590 kernel: io scheduler bfq registered
Dec 13 14:33:36.110603 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:33:36.110616 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:33:36.110628 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:33:36.110641 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:33:36.110653 kernel: i8042: Warning: Keylock active
Dec 13 14:33:36.110665 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:33:36.110679 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:33:36.110803 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:33:36.111810 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:33:36.111937 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:33:35 UTC (1734100415)
Dec 13 14:33:36.112060 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:33:36.112080 kernel: intel_pstate: CPU model not supported
Dec 13 14:33:36.112180 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:33:36.112196 kernel: Segment Routing with IPv6
Dec 13 14:33:36.112218 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:33:36.112233 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:33:36.112248 kernel: Key type dns_resolver registered
Dec 13 14:33:36.112262 kernel: IPI shorthand broadcast: enabled
Dec 13 14:33:36.112278 kernel: sched_clock: Marking stable (417276496, 268524752)->(815476726, -129675478)
Dec 13 14:33:36.112305 kernel: registered taskstats version 1
Dec 13 14:33:36.112320 kernel: Loading compiled-in X.509 certificates
Dec 13 14:33:36.112336 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:33:36.112350 kernel: Key type .fscrypt registered
Dec 13 14:33:36.112368 kernel: Key type fscrypt-provisioning registered
Dec 13 14:33:36.112384 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:33:36.112399 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:33:36.112415 kernel: ima: No architecture policies found
Dec 13 14:33:36.112429 kernel: clk: Disabling unused clocks
Dec 13 14:33:36.112444 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:33:36.112459 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:33:36.112474 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:33:36.112490 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:33:36.112509 kernel: Run /init as init process
Dec 13 14:33:36.112615 kernel: with arguments:
Dec 13 14:33:36.112631 kernel: /init
Dec 13 14:33:36.112645 kernel: with environment:
Dec 13 14:33:36.112661 kernel: HOME=/
Dec 13 14:33:36.112675 kernel: TERM=linux
Dec 13 14:33:36.112690 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:33:36.112710 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:33:36.112811 systemd[1]: Detected virtualization amazon.
Dec 13 14:33:36.112828 systemd[1]: Detected architecture x86-64.
Dec 13 14:33:36.112845 systemd[1]: Running in initrd.
Dec 13 14:33:36.112861 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:33:36.112894 systemd[1]: Hostname set to .
Dec 13 14:33:36.112919 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:33:36.112935 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:33:36.112952 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:33:36.112969 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:33:36.112986 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:33:36.113002 systemd[1]: Reached target paths.target.
Dec 13 14:33:36.113018 systemd[1]: Reached target slices.target.
Dec 13 14:33:36.113035 systemd[1]: Reached target swap.target.
Dec 13 14:33:36.113052 systemd[1]: Reached target timers.target.
Dec 13 14:33:36.113073 systemd[1]: Listening on iscsid.socket.
Dec 13 14:33:36.113090 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:33:36.113106 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:33:36.113123 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:33:36.113139 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:33:36.113158 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:33:36.113175 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:33:36.113192 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:33:36.113213 systemd[1]: Reached target sockets.target.
Dec 13 14:33:36.113229 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:33:36.113246 systemd[1]: Finished network-cleanup.service.
Dec 13 14:33:36.113263 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:33:36.113280 systemd[1]: Starting systemd-journald.service...
Dec 13 14:33:36.113308 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:33:36.113322 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:33:36.113338 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:33:36.113363 systemd-journald[185]: Journal started
Dec 13 14:33:36.113449 systemd-journald[185]: Runtime Journal (/run/log/journal/ec234243f45a6893818ec96dc839f67c) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:33:36.125328 systemd[1]: Started systemd-journald.service.
Dec 13 14:33:36.126347 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:33:36.322623 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:33:36.322654 kernel: Bridge firewalling registered
Dec 13 14:33:36.322669 kernel: SCSI subsystem initialized
Dec 13 14:33:36.322679 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:33:36.322694 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:33:36.322707 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:33:36.322718 kernel: audit: type=1130 audit(1734100416.317:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.180045 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:33:36.195833 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:33:36.195847 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:33:36.195904 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:33:36.204199 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:33:36.226868 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:33:36.318622 systemd[1]: Started systemd-resolved.service.
Dec 13 14:33:36.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.341537 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:33:36.346146 kernel: audit: type=1130 audit(1734100416.340:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.344707 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:33:36.352525 kernel: audit: type=1130 audit(1734100416.344:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.350909 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:33:36.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.352784 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:33:36.371548 kernel: audit: type=1130 audit(1734100416.350:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.371583 kernel: audit: type=1130 audit(1734100416.352:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.373607 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:33:36.377463 kernel: audit: type=1130 audit(1734100416.372:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.380334 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:33:36.383138 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:33:36.384938 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:33:36.400106 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:33:36.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.405316 kernel: audit: type=1130 audit(1734100416.399:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.414633 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:33:36.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.428330 kernel: audit: type=1130 audit(1734100416.414:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.431972 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:33:36.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.434348 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:33:36.440058 kernel: audit: type=1130 audit(1734100416.432:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.446777 dracut-cmdline[206]: dracut-dracut-053
Dec 13 14:33:36.449437 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:33:36.535344 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:33:36.557328 kernel: iscsi: registered transport (tcp)
Dec 13 14:33:36.587541 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:33:36.587627 kernel: QLogic iSCSI HBA Driver
Dec 13 14:33:36.622525 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:33:36.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:36.625820 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:33:36.686368 kernel: raid6: avx512x4 gen() 15736 MB/s
Dec 13 14:33:36.704353 kernel: raid6: avx512x4 xor() 6391 MB/s
Dec 13 14:33:36.721378 kernel: raid6: avx512x2 gen() 17412 MB/s
Dec 13 14:33:36.738351 kernel: raid6: avx512x2 xor() 23576 MB/s
Dec 13 14:33:36.755353 kernel: raid6: avx512x1 gen() 15763 MB/s
Dec 13 14:33:36.772352 kernel: raid6: avx512x1 xor() 20964 MB/s
Dec 13 14:33:36.789356 kernel: raid6: avx2x4 gen() 16812 MB/s
Dec 13 14:33:36.806350 kernel: raid6: avx2x4 xor() 6118 MB/s
Dec 13 14:33:36.823342 kernel: raid6: avx2x2 gen() 17564 MB/s
Dec 13 14:33:36.841362 kernel: raid6: avx2x2 xor() 17788 MB/s
Dec 13 14:33:36.858359 kernel: raid6: avx2x1 gen() 12679 MB/s
Dec 13 14:33:36.875358 kernel: raid6: avx2x1 xor() 14230 MB/s
Dec 13 14:33:36.892355 kernel: raid6: sse2x4 gen() 9146 MB/s
Dec 13 14:33:36.909390 kernel: raid6: sse2x4 xor() 5113 MB/s
Dec 13 14:33:36.926355 kernel: raid6: sse2x2 gen() 10166 MB/s
Dec 13 14:33:36.943363 kernel: raid6: sse2x2 xor() 5000 MB/s
Dec 13 14:33:36.960361 kernel: raid6: sse2x1 gen() 8091 MB/s
Dec 13 14:33:36.978163 kernel: raid6: sse2x1 xor() 4486 MB/s
Dec 13 14:33:36.978417 kernel: raid6: using algorithm avx2x2 gen() 17564 MB/s
Dec 13 14:33:36.978438 kernel: raid6: .... xor() 17788 MB/s, rmw enabled
Dec 13 14:33:36.979448 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:33:36.994330 kernel: xor: automatically using best checksumming function avx
Dec 13 14:33:37.111630 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:33:37.152321 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:33:37.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:37.153000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:33:37.153000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:33:37.155053 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:33:37.179377 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Dec 13 14:33:37.185933 systemd[1]: Started systemd-udevd.service.
Dec 13 14:33:37.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:37.190345 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:33:37.215984 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Dec 13 14:33:37.257621 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:33:37.260649 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:33:37.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:37.334578 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:33:37.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:37.424323 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:33:37.456641 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:33:37.456723 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:33:37.476512 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:33:37.482055 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:33:37.483021 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 14:33:37.483289 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:1b:2f:38:a9:11
Dec 13 14:33:37.486226 (udev-worker)[433]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:37.736163 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:33:37.736400 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:33:37.736414 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:33:37.736513 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:33:37.736526 kernel: GPT:9289727 != 16777215
Dec 13 14:33:37.736536 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:33:37.736547 kernel: GPT:9289727 != 16777215
Dec 13 14:33:37.736561 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:33:37.736572 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:37.736582 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (427)
Dec 13 14:33:37.669613 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:33:37.744048 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:33:37.779215 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:33:37.789588 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:33:37.789745 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:33:37.795524 systemd[1]: Starting disk-uuid.service...
Dec 13 14:33:37.803063 disk-uuid[586]: Primary Header is updated.
Dec 13 14:33:37.803063 disk-uuid[586]: Secondary Entries is updated.
Dec 13 14:33:37.803063 disk-uuid[586]: Secondary Header is updated.
Dec 13 14:33:37.809344 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:37.817353 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:37.823330 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:38.827346 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:33:38.832354 disk-uuid[587]: The operation has completed successfully.
Dec 13 14:33:39.034756 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:33:39.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.034988 systemd[1]: Finished disk-uuid.service.
Dec 13 14:33:39.056544 systemd[1]: Starting verity-setup.service...
Dec 13 14:33:39.080323 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:33:39.193461 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:33:39.197404 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:33:39.201661 systemd[1]: Finished verity-setup.service.
Dec 13 14:33:39.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.315520 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:33:39.316229 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:33:39.317326 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:33:39.318122 systemd[1]: Starting ignition-setup.service...
Dec 13 14:33:39.326162 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:33:39.358036 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:33:39.358104 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:33:39.358116 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:33:39.382320 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:33:39.400625 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:33:39.417336 systemd[1]: Finished ignition-setup.service.
Dec 13 14:33:39.418000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.421958 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:33:39.463814 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:33:39.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.466000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:33:39.467571 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:33:39.505068 systemd-networkd[1098]: lo: Link UP
Dec 13 14:33:39.505083 systemd-networkd[1098]: lo: Gained carrier
Dec 13 14:33:39.507093 systemd-networkd[1098]: Enumeration completed
Dec 13 14:33:39.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.507365 systemd[1]: Started systemd-networkd.service.
Dec 13 14:33:39.508729 systemd[1]: Reached target network.target.
Dec 13 14:33:39.512898 systemd[1]: Starting iscsiuio.service...
Dec 13 14:33:39.515213 systemd-networkd[1098]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:33:39.522882 systemd-networkd[1098]: eth0: Link UP
Dec 13 14:33:39.522896 systemd-networkd[1098]: eth0: Gained carrier
Dec 13 14:33:39.523285 systemd[1]: Started iscsiuio.service.
Dec 13 14:33:39.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.526022 systemd[1]: Starting iscsid.service...
Dec 13 14:33:39.532338 iscsid[1103]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:33:39.532338 iscsid[1103]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Dec 13 14:33:39.532338 iscsid[1103]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:33:39.532338 iscsid[1103]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:33:39.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.545604 iscsid[1103]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:33:39.545604 iscsid[1103]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:33:39.534873 systemd[1]: Started iscsid.service.
Dec 13 14:33:39.542523 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:33:39.588744 systemd-networkd[1098]: eth0: DHCPv4 address 172.31.23.152/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:33:39.608226 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:33:39.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.608522 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:33:39.611015 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:33:39.612232 systemd[1]: Reached target remote-fs.target.
Dec 13 14:33:39.615033 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:33:39.628242 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:33:39.628000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.869606 ignition[1055]: Ignition 2.14.0
Dec 13 14:33:39.869622 ignition[1055]: Stage: fetch-offline
Dec 13 14:33:39.869802 ignition[1055]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:39.869849 ignition[1055]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:39.909804 ignition[1055]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:39.910597 ignition[1055]: Ignition finished successfully
Dec 13 14:33:39.926008 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:33:39.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:39.929510 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:33:39.945133 ignition[1122]: Ignition 2.14.0
Dec 13 14:33:39.945144 ignition[1122]: Stage: fetch
Dec 13 14:33:39.945332 ignition[1122]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:39.945356 ignition[1122]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:39.956206 ignition[1122]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:39.961028 ignition[1122]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.009501 ignition[1122]: INFO : PUT result: OK
Dec 13 14:33:40.021317 ignition[1122]: DEBUG : parsed url from cmdline: ""
Dec 13 14:33:40.021317 ignition[1122]: INFO : no config URL provided
Dec 13 14:33:40.021317 ignition[1122]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:33:40.025389 ignition[1122]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:33:40.025389 ignition[1122]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.025389 ignition[1122]: INFO : PUT result: OK
Dec 13 14:33:40.029028 ignition[1122]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:33:40.030734 ignition[1122]: INFO : GET result: OK
Dec 13 14:33:40.030734 ignition[1122]: DEBUG : parsing config with SHA512: 6ff3f573e047fc540c79f025218ab39e69f6f3325ce37d8f0608b3fda8fa7b82dc4ee65be9be9afd81983ec02db4d51c71a1240fce9db220609e0c22a680d60e
Dec 13 14:33:40.032083 unknown[1122]: fetched base config from "system"
Dec 13 14:33:40.033148 ignition[1122]: fetch: fetch complete
Dec 13 14:33:40.032090 unknown[1122]: fetched base config from "system"
Dec 13 14:33:40.033155 ignition[1122]: fetch: fetch passed
Dec 13 14:33:40.032095 unknown[1122]: fetched user config from "aws"
Dec 13 14:33:40.033308 ignition[1122]: Ignition finished successfully
Dec 13 14:33:40.044613 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:33:40.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.059382 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:33:40.074415 ignition[1128]: Ignition 2.14.0
Dec 13 14:33:40.074424 ignition[1128]: Stage: kargs
Dec 13 14:33:40.074671 ignition[1128]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:40.074696 ignition[1128]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:40.090800 ignition[1128]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:40.093038 ignition[1128]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.100549 ignition[1128]: INFO : PUT result: OK
Dec 13 14:33:40.103679 ignition[1128]: kargs: kargs passed
Dec 13 14:33:40.103744 ignition[1128]: Ignition finished successfully
Dec 13 14:33:40.108744 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:33:40.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.114616 systemd[1]: Starting ignition-disks.service...
Dec 13 14:33:40.130940 ignition[1134]: Ignition 2.14.0
Dec 13 14:33:40.130955 ignition[1134]: Stage: disks
Dec 13 14:33:40.131433 ignition[1134]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:40.131470 ignition[1134]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:40.148207 ignition[1134]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:40.149885 ignition[1134]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.152620 ignition[1134]: INFO : PUT result: OK
Dec 13 14:33:40.163772 ignition[1134]: disks: disks passed
Dec 13 14:33:40.164135 ignition[1134]: Ignition finished successfully
Dec 13 14:33:40.168130 systemd[1]: Finished ignition-disks.service.
Dec 13 14:33:40.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.171010 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:33:40.174055 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:33:40.176309 systemd[1]: Reached target local-fs.target.
Dec 13 14:33:40.178083 systemd[1]: Reached target sysinit.target.
Dec 13 14:33:40.180979 systemd[1]: Reached target basic.target.
Dec 13 14:33:40.184702 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:33:40.226473 systemd-fsck[1142]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:33:40.236332 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:33:40.252934 kernel: kauditd_printk_skb: 22 callbacks suppressed
Dec 13 14:33:40.252962 kernel: audit: type=1130 audit(1734100420.246:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.248080 systemd[1]: Mounting sysroot.mount...
Dec 13 14:33:40.274319 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:33:40.275673 systemd[1]: Mounted sysroot.mount.
Dec 13 14:33:40.277618 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:33:40.291554 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:33:40.296117 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:33:40.297251 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:33:40.297309 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:33:40.313435 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:33:40.326898 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:33:40.330835 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:33:40.345025 initrd-setup-root[1164]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:33:40.360904 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1159)
Dec 13 14:33:40.374249 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:33:40.374351 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:33:40.374372 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:33:40.374997 initrd-setup-root[1172]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:33:40.395472 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:33:40.398806 initrd-setup-root[1198]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:33:40.412264 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:33:40.424340 initrd-setup-root[1206]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:33:40.546499 systemd-networkd[1098]: eth0: Gained IPv6LL
Dec 13 14:33:40.628386 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:33:40.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.633947 systemd[1]: Starting ignition-mount.service...
Dec 13 14:33:40.652849 kernel: audit: type=1130 audit(1734100420.632:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.645756 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:33:40.658749 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:33:40.658898 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Dec 13 14:33:40.704966 ignition[1225]: INFO : Ignition 2.14.0
Dec 13 14:33:40.706894 ignition[1225]: INFO : Stage: mount
Dec 13 14:33:40.708192 ignition[1225]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:40.709708 ignition[1225]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:40.727798 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:33:40.745777 kernel: audit: type=1130 audit(1734100420.728:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.750879 ignition[1225]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:40.754220 ignition[1225]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.758266 ignition[1225]: INFO : PUT result: OK
Dec 13 14:33:40.769966 ignition[1225]: INFO : mount: mount passed
Dec 13 14:33:40.771197 ignition[1225]: INFO : Ignition finished successfully
Dec 13 14:33:40.774444 systemd[1]: Finished ignition-mount.service.
Dec 13 14:33:40.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.777218 systemd[1]: Starting ignition-files.service...
Dec 13 14:33:40.781763 kernel: audit: type=1130 audit(1734100420.775:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:40.788236 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:33:40.815354 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1234)
Dec 13 14:33:40.821253 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:33:40.821339 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:33:40.821357 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:33:40.835328 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:33:40.839363 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:33:40.874893 ignition[1253]: INFO : Ignition 2.14.0
Dec 13 14:33:40.874893 ignition[1253]: INFO : Stage: files
Dec 13 14:33:40.877004 ignition[1253]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:40.877004 ignition[1253]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:40.899235 ignition[1253]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:40.902048 ignition[1253]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:40.908855 ignition[1253]: INFO : PUT result: OK
Dec 13 14:33:40.919684 ignition[1253]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:33:40.946761 ignition[1253]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:33:40.946761 ignition[1253]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:33:40.964260 ignition[1253]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:33:40.977187 ignition[1253]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:33:40.985936 unknown[1253]: wrote ssh authorized keys file for user: core
Dec 13 14:33:40.987646 ignition[1253]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:33:40.999257 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:33:41.001331 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:33:41.025309 ignition[1253]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3831036978"
Dec 13 14:33:41.035476 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1255)
Dec 13 14:33:41.035531 ignition[1253]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3831036978": device or resource busy
Dec 13 14:33:41.035531 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3831036978", trying btrfs: device or resource busy
Dec 13 14:33:41.035531 ignition[1253]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3831036978"
Dec 13 14:33:41.048225 ignition[1253]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3831036978"
Dec 13 14:33:41.050392 ignition[1253]: INFO : op(3): [started] unmounting "/mnt/oem3831036978"
Dec 13 14:33:41.051762 ignition[1253]: INFO : op(3): [finished] unmounting "/mnt/oem3831036978"
Dec 13 14:33:41.051762 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:33:41.051762 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:33:41.057066 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:33:41.061921 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13
14:33:41.064465 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:33:41.064465 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:33:41.070065 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:33:41.070065 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:33:41.070065 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:41.085865 ignition[1253]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3700521263" Dec 13 14:33:41.087815 ignition[1253]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3700521263": device or resource busy Dec 13 14:33:41.087815 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3700521263", trying btrfs: device or resource busy Dec 13 14:33:41.087815 ignition[1253]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3700521263" Dec 13 14:33:41.100577 ignition[1253]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3700521263" Dec 13 14:33:41.100577 ignition[1253]: INFO : op(6): [started] unmounting "/mnt/oem3700521263" Dec 13 14:33:41.100577 ignition[1253]: INFO : op(6): [finished] unmounting "/mnt/oem3700521263" Dec 13 14:33:41.100577 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 14:33:41.100577 ignition[1253]: INFO : files: createFilesystemsFiles: 
createFiles: op(8): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:33:41.100577 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:41.138559 ignition[1253]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem171226699" Dec 13 14:33:41.138559 ignition[1253]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem171226699": device or resource busy Dec 13 14:33:41.138559 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem171226699", trying btrfs: device or resource busy Dec 13 14:33:41.138559 ignition[1253]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem171226699" Dec 13 14:33:41.155883 ignition[1253]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem171226699" Dec 13 14:33:41.155883 ignition[1253]: INFO : op(9): [started] unmounting "/mnt/oem171226699" Dec 13 14:33:41.155883 ignition[1253]: INFO : op(9): [finished] unmounting "/mnt/oem171226699" Dec 13 14:33:41.155883 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 14:33:41.155883 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:33:41.155883 ignition[1253]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 14:33:41.198147 ignition[1253]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem403123081" Dec 13 14:33:41.198147 ignition[1253]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem403123081": device or resource busy Dec 13 14:33:41.198147 ignition[1253]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem403123081", trying btrfs: device or resource busy Dec 13 14:33:41.198147 ignition[1253]: INFO : op(b): [started] mounting 
"/dev/disk/by-label/OEM" at "/mnt/oem403123081" Dec 13 14:33:41.198147 ignition[1253]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem403123081" Dec 13 14:33:41.198147 ignition[1253]: INFO : op(c): [started] unmounting "/mnt/oem403123081" Dec 13 14:33:41.198147 ignition[1253]: INFO : op(c): [finished] unmounting "/mnt/oem403123081" Dec 13 14:33:41.223872 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 14:33:41.223872 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:33:41.223872 ignition[1253]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Dec 13 14:33:41.823051 ignition[1253]: INFO : GET result: OK Dec 13 14:33:42.652900 ignition[1253]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Dec 13 14:33:42.652900 ignition[1253]: INFO : files: op(b): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:33:42.652900 ignition[1253]: INFO : files: op(b): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 14:33:42.652900 ignition[1253]: INFO : files: op(c): [started] processing unit "amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(c): op(d): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(c): op(d): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(c): [finished] processing unit "amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(e): 
[started] processing unit "nvidia.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(e): [finished] processing unit "nvidia.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(f): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(f): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(10): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(10): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(11): [started] setting preset to enabled for "nvidia.service" Dec 13 14:33:42.665943 ignition[1253]: INFO : files: op(11): [finished] setting preset to enabled for "nvidia.service" Dec 13 14:33:42.745334 ignition[1253]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:33:42.745334 ignition[1253]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:33:42.745334 ignition[1253]: INFO : files: files passed Dec 13 14:33:42.745334 ignition[1253]: INFO : Ignition finished successfully Dec 13 14:33:42.789368 kernel: audit: type=1130 audit(1734100422.767:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:42.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:42.754220 systemd[1]: Finished ignition-files.service. Dec 13 14:33:42.797121 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
Dec 13 14:33:42.816186 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:33:42.817568 systemd[1]: Starting ignition-quench.service...
Dec 13 14:33:42.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.828405 initrd-setup-root-after-ignition[1276]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:33:42.833246 kernel: audit: type=1130 audit(1734100422.823:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.819142 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:33:42.824466 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:33:42.830333 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:33:42.841492 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:33:42.841701 systemd[1]: Finished ignition-quench.service.
Dec 13 14:33:42.856030 kernel: audit: type=1130 audit(1734100422.842:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.865556 kernel: audit: type=1131 audit(1734100422.842:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.842000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.906091 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:33:42.906459 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:33:42.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.918356 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:33:42.935165 kernel: audit: type=1130 audit(1734100422.917:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.935211 kernel: audit: type=1131 audit(1734100422.917:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:42.935632 systemd[1]: Reached target initrd.target.
Dec 13 14:33:42.938870 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:33:42.948832 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:33:43.000451 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:33:43.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.010768 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:33:43.060901 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:33:43.064249 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:33:43.066650 systemd[1]: Stopped target timers.target.
Dec 13 14:33:43.069505 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:33:43.071438 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:33:43.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.080157 systemd[1]: Stopped target initrd.target.
Dec 13 14:33:43.082754 systemd[1]: Stopped target basic.target.
Dec 13 14:33:43.085204 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:33:43.092191 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:33:43.094334 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:33:43.115719 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:33:43.121649 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:33:43.128434 systemd[1]: Stopped target sysinit.target.
Dec 13 14:33:43.136320 systemd[1]: Stopped target local-fs.target.
Dec 13 14:33:43.141972 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:33:43.148402 systemd[1]: Stopped target swap.target.
Dec 13 14:33:43.151644 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:33:43.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.151805 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:33:43.154753 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:33:43.160456 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:33:43.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.161196 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:33:43.166580 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:33:43.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.166756 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:33:43.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.176437 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:33:43.227115 ignition[1291]: INFO : Ignition 2.14.0
Dec 13 14:33:43.227115 ignition[1291]: INFO : Stage: umount
Dec 13 14:33:43.227115 ignition[1291]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:33:43.227115 ignition[1291]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:33:43.227115 ignition[1291]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:33:43.227115 ignition[1291]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:33:43.227115 ignition[1291]: INFO : PUT result: OK
Dec 13 14:33:43.244010 iscsid[1103]: iscsid shutting down.
Dec 13 14:33:43.176595 systemd[1]: Stopped ignition-files.service.
Dec 13 14:33:43.245994 ignition[1291]: INFO : umount: umount passed
Dec 13 14:33:43.245994 ignition[1291]: INFO : Ignition finished successfully
Dec 13 14:33:43.182940 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:33:43.225951 systemd[1]: Stopping iscsid.service...
Dec 13 14:33:43.232642 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:33:43.254097 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:33:43.255582 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:33:43.257867 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:33:43.257000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.259494 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:33:43.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.266040 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:33:43.267394 systemd[1]: Stopped iscsid.service.
Dec 13 14:33:43.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.271491 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:33:43.272692 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:33:43.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.276816 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:33:43.281248 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 14:33:43.281412 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:33:43.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.291000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.297179 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:33:43.297567 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:33:43.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.303603 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:33:43.303970 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:33:43.318468 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:33:43.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.336280 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:33:43.345830 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:33:43.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.345918 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:33:43.356746 systemd[1]: Stopped target paths.target.
Dec 13 14:33:43.360056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:33:43.362116 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:33:43.364741 systemd[1]: Stopped target slices.target.
Dec 13 14:33:43.377911 systemd[1]: Stopped target sockets.target.
Dec 13 14:33:43.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.378655 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:33:43.378725 systemd[1]: Closed iscsid.socket.
Dec 13 14:33:43.392388 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:33:43.392475 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:33:43.393936 systemd[1]: Stopping iscsiuio.service...
Dec 13 14:33:43.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.403186 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:33:43.403289 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:33:43.407934 systemd[1]: Stopped target network.target.
Dec 13 14:33:43.409053 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:33:43.409095 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:33:43.410362 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:33:43.414112 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:33:43.417356 systemd-networkd[1098]: eth0: DHCPv6 lease lost
Dec 13 14:33:43.422179 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:33:43.422313 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:33:43.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.432714 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:33:43.438033 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:33:43.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.444103 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:33:43.447289 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:33:43.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.449000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:33:43.449000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:33:43.450145 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:33:43.450205 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:33:43.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.453354 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:33:43.453436 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:33:43.461936 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:33:43.479047 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:33:43.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.480177 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:33:43.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.490985 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:33:43.491144 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:33:43.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.501718 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:33:43.501800 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:33:43.510129 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:33:43.538981 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:33:43.544812 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:33:43.545058 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:33:43.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.560415 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:33:43.560753 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:33:43.574916 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:33:43.575329 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:33:43.583839 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:33:43.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.584069 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:33:43.596164 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:33:43.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.596721 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:33:43.604983 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:33:43.605056 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:33:43.626357 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:33:43.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.666404 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 14:33:43.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.666534 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Dec 13 14:33:43.675078 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:33:43.692000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.675151 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:33:43.684775 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:33:43.684852 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:33:43.699593 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Dec 13 14:33:43.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.700968 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:33:43.701085 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:33:43.724471 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:33:43.726123 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:33:43.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:43.728973 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:33:43.729990 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:33:43.765905 systemd[1]: Switching root.
Dec 13 14:33:43.793098 systemd-journald[185]: Journal stopped
Dec 13 14:33:49.167305 systemd-journald[185]: Received SIGTERM from PID 1 (systemd).
Dec 13 14:33:49.167942 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:33:49.167963 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:33:49.167977 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:33:49.167989 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:33:49.168001 kernel: SELinux: policy capability open_perms=1
Dec 13 14:33:49.168013 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:33:49.168025 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:33:49.168037 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:33:49.168048 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:33:49.168061 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:33:49.168075 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:33:49.168087 systemd[1]: Successfully loaded SELinux policy in 126.029ms.
Dec 13 14:33:49.168118 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 15.709ms.
Dec 13 14:33:49.168140 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:33:49.168153 systemd[1]: Detected virtualization amazon.
Dec 13 14:33:49.168165 systemd[1]: Detected architecture x86-64.
Dec 13 14:33:49.168179 systemd[1]: Detected first boot.
Dec 13 14:33:49.168192 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:33:49.168204 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:33:49.168218 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:33:49.168231 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:33:49.168245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:33:49.168260 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:33:49.168273 kernel: kauditd_printk_skb: 53 callbacks suppressed
Dec 13 14:33:49.168285 kernel: audit: type=1334 audit(1734100428.835:89): prog-id=12 op=LOAD
Dec 13 14:33:49.168329 kernel: audit: type=1334 audit(1734100428.835:90): prog-id=3 op=UNLOAD
Dec 13 14:33:49.168340 kernel: audit: type=1334 audit(1734100428.836:91): prog-id=13 op=LOAD
Dec 13 14:33:49.168351 kernel: audit: type=1334 audit(1734100428.837:92): prog-id=14 op=LOAD
Dec 13 14:33:49.168363 kernel: audit: type=1334 audit(1734100428.837:93): prog-id=4 op=UNLOAD
Dec 13 14:33:49.168373 kernel: audit: type=1334 audit(1734100428.837:94): prog-id=5 op=UNLOAD
Dec 13 14:33:49.168384 kernel: audit: type=1334 audit(1734100428.842:95): prog-id=15 op=LOAD
Dec 13 14:33:49.168395 kernel: audit: type=1334 audit(1734100428.842:96): prog-id=12 op=UNLOAD
Dec 13 14:33:49.168409 kernel: audit: type=1334 audit(1734100428.843:97): prog-id=16 op=LOAD
Dec 13 14:33:49.168420 kernel: audit: type=1334 audit(1734100428.844:98): prog-id=17 op=LOAD
Dec 13 14:33:49.168431 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:33:49.168443 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:33:49.168454 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:33:49.168467 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:33:49.168481 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:33:49.168494 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:33:49.168508 systemd[1]: Created slice system-getty.slice.
Dec 13 14:33:49.168520 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:33:49.168531 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:33:49.168544 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:33:49.168555 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:33:49.168569 systemd[1]: Created slice user.slice.
Dec 13 14:33:49.168580 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:33:49.168592 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:33:49.168607 systemd[1]: Set up automount boot.automount.
Dec 13 14:33:49.168619 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:33:49.168632 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:33:49.168644 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:33:49.168655 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:33:49.168667 systemd[1]: Reached target integritysetup.target.
Dec 13 14:33:49.168679 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:33:49.168691 systemd[1]: Reached target remote-fs.target.
Dec 13 14:33:49.168703 systemd[1]: Reached target slices.target.
Dec 13 14:33:49.168714 systemd[1]: Reached target swap.target.
Dec 13 14:33:49.168729 systemd[1]: Reached target torcx.target.
Dec 13 14:33:49.168740 systemd[1]: Reached target veritysetup.target.
Dec 13 14:33:49.168752 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:33:49.168891 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:33:49.168908 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:33:49.168921 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:33:49.168933 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:33:49.168949 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:33:49.168962 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:33:49.168977 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:33:49.168989 systemd[1]: Mounting media.mount...
Dec 13 14:33:49.169001 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:33:49.169013 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:33:49.169024 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:33:49.169036 systemd[1]: Mounting tmp.mount...
Dec 13 14:33:49.169048 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:33:49.169059 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:33:49.169071 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:33:49.169085 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:33:49.169096 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:33:49.169108 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:33:49.169120 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:33:49.169132 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:33:49.169143 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:33:49.169156 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:33:49.169168 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:33:49.169179 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:33:49.169194 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:33:49.169207 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:33:49.169220 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:33:49.169231 systemd[1]: Starting systemd-journald.service...
Dec 13 14:33:49.169243 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:33:49.169254 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:33:49.169266 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:33:49.169278 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:33:49.169306 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:33:49.169321 systemd[1]: Stopped verity-setup.service.
Dec 13 14:33:49.169333 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:33:49.169345 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:33:49.169356 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:33:49.169368 systemd[1]: Mounted media.mount.
Dec 13 14:33:49.169392 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:33:49.169405 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:33:49.169418 systemd[1]: Mounted tmp.mount.
Dec 13 14:33:49.169431 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:33:49.169443 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:33:49.169455 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:33:49.169467 kernel: loop: module loaded
Dec 13 14:33:49.169479 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:33:49.169491 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:33:49.169505 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:33:49.169517 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:33:49.169529 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:33:49.169541 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:33:49.169554 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:33:49.169568 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:33:49.169581 kernel: fuse: init (API version 7.34)
Dec 13 14:33:49.169592 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:33:49.169611 systemd-journald[1398]: Journal started
Dec 13 14:33:49.169678 systemd-journald[1398]: Runtime Journal (/run/log/journal/ec234243f45a6893818ec96dc839f67c) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:33:44.305000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:33:44.427000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:33:44.428000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:33:44.428000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:33:44.428000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:33:44.428000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:33:44.428000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:33:44.672000 audit[1326]: AVC avc: denied { associate } for pid=1326 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:33:44.672000 audit[1326]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1309 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:44.672000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:33:44.674000 audit[1326]: AVC avc: denied { associate } for pid=1326 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:33:44.674000 audit[1326]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1309 pid=1326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:44.674000 audit: CWD cwd="/"
Dec 13 14:33:44.674000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:49.178659 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:33:49.178709 systemd[1]: Started systemd-journald.service.
Dec 13 14:33:44.674000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:44.674000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:33:48.835000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:33:48.835000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:33:48.836000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:33:48.837000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:33:48.837000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:33:48.837000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:33:48.842000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:33:48.842000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:33:48.843000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:33:48.844000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:33:48.844000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:33:48.844000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:33:48.847000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:33:48.847000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:33:48.851000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:33:48.851000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:33:48.851000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:33:48.851000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:33:48.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:48.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:48.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:48.859000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:33:49.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.059000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:33:49.059000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:33:49.059000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:33:49.060000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:33:49.060000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:33:49.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.164000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:33:49.164000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdba4207f0 a2=4000 a3=7ffdba42088c items=0 ppid=1 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:49.164000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:33:49.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:48.834146 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:33:49.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:44.668311 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:33:48.851943 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:33:44.669449 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:33:49.178346 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:33:44.669470 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:33:49.181599 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:33:44.669505 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:33:49.183110 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:33:44.669515 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:33:49.184678 systemd[1]: Reached target network-pre.target.
Dec 13 14:33:44.669553 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:33:49.192393 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:33:44.669568 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:33:49.195856 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:33:44.670530 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:33:49.200787 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:33:44.670583 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:33:44.670597 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:33:44.671761 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:33:44.671798 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:33:44.671819 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:33:44.671834 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:33:44.671852 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:33:44.671865 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:44Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:33:48.186192 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:33:49.204925 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:33:48.186664 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:33:48.186788 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:33:48.190699 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:33:48.192149 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:33:48.193323 /usr/lib/systemd/system-generators/torcx-generator[1326]: time="2024-12-13T14:33:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:33:49.210010 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:33:49.215118 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:33:49.220335 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:33:49.221440 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:33:49.224053 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:33:49.228397 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:33:49.231976 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:33:49.250949 systemd-journald[1398]: Time spent on flushing to /var/log/journal/ec234243f45a6893818ec96dc839f67c is 86.988ms for 1182 entries.
Dec 13 14:33:49.250949 systemd-journald[1398]: System Journal (/var/log/journal/ec234243f45a6893818ec96dc839f67c) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:33:49.357179 systemd-journald[1398]: Received client request to flush runtime journal.
Dec 13 14:33:49.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.265949 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:33:49.267787 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:33:49.300320 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:33:49.349383 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:33:49.352867 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:33:49.358563 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:33:49.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.376613 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:33:49.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.381192 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:33:49.389276 udevadm[1439]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 14:33:49.459999 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:33:49.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:49.463823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:33:49.557374 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:33:49.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.157940 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:33:50.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.159000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:33:50.159000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:33:50.159000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:33:50.159000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:33:50.160862 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:33:50.209358 systemd-udevd[1445]: Using default interface naming scheme 'v252'.
Dec 13 14:33:50.255293 systemd[1]: Started systemd-udevd.service.
Dec 13 14:33:50.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.257000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:33:50.259539 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:33:50.271000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:33:50.271000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:33:50.271000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:33:50.272949 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:33:50.351599 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:33:50.355189 (udev-worker)[1451]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:50.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.399188 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:33:50.475322 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Dec 13 14:33:50.479321 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:33:50.482638 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3
Dec 13 14:33:50.482729 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:33:50.497000 audit[1449]: AVC avc: denied { confidentiality } for pid=1449 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:33:50.589652 systemd-networkd[1453]: lo: Link UP
Dec 13 14:33:50.589667 systemd-networkd[1453]: lo: Gained carrier
Dec 13 14:33:50.590273 systemd-networkd[1453]: Enumeration completed
Dec 13 14:33:50.590464 systemd[1]: Started systemd-networkd.service.
Dec 13 14:33:50.590693 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:33:50.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.593614 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:33:50.598070 systemd-networkd[1453]: eth0: Link UP
Dec 13 14:33:50.598322 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:33:50.598556 systemd-networkd[1453]: eth0: Gained carrier
Dec 13 14:33:50.497000 audit[1449]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55fa03334c70 a1=337fc a2=7f46cb17ebc5 a3=5 items=110 ppid=1445 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:33:50.497000 audit: CWD cwd="/"
Dec 13 14:33:50.497000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=1 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=2 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=3 name=(null) inode=13186 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=4 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=5 name=(null) inode=13187 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=6 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=7 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=8 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=9 name=(null) inode=13189 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=10 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=11 name=(null) inode=13190 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=12 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=13 name=(null) inode=13191 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=14 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=15 name=(null) inode=13192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=16 name=(null) inode=13188 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=17 name=(null) inode=13193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=18 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=19 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=20 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=21 name=(null) inode=13195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=22 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=23 name=(null) inode=13196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=24 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=25 name=(null) inode=13197 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=26 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=27 name=(null) inode=13198 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=28 name=(null) inode=13194 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=29 name=(null) inode=13199 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=30 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=31 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=32 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=33 name=(null) inode=13201 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=34 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=35 name=(null) inode=13202 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=36 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=37 name=(null) inode=13203 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=38 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=39 name=(null) inode=13204 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=40 name=(null) inode=13200 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=41 name=(null) inode=13205 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=42 name=(null) inode=13185 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=43 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=44 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=45 name=(null) inode=13207 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=46 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=47 name=(null) inode=13208 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=48 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.620588 systemd-networkd[1453]: eth0: DHCPv4 address 172.31.23.152/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:33:50.497000 audit: PATH item=49 name=(null) inode=13209 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=50 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=51 name=(null) inode=13210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=52 name=(null) inode=13206 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=53 name=(null) inode=13211 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=55 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=56 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=57 name=(null) inode=13213 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=58 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=59 name=(null) inode=13214 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=60 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=61 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=62 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=63 name=(null) inode=13216 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=64 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=65 name=(null) inode=13217 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=66 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=67 name=(null) inode=13218 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=68 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=69 name=(null) inode=13219 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=70 name=(null) inode=13215 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=71 name=(null) inode=13220 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=72 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=73 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=74 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=75 name=(null) inode=13222 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=76 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=77 name=(null) inode=13223 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=78 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=79 name=(null) inode=13224 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=80 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=81 name=(null) inode=13225 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=82 name=(null) inode=13221 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=83 name=(null) inode=13226 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=84 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=85 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=86 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=87 name=(null) inode=13228 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=88 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=89 name=(null) inode=13229 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=90 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=91 name=(null) inode=13230 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=92 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=93 name=(null) inode=13231 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=94 name=(null) inode=13227 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=95 name=(null) inode=13232 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=96 name=(null) inode=13212 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=97 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=98 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=99 name=(null) inode=13234 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=100 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=101 name=(null) inode=13235 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=102 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=103 name=(null) inode=13236 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=104 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=105 name=(null) inode=13237 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=106 name=(null) inode=13233 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=107 name=(null) inode=13238 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PATH item=109 name=(null) inode=13245 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:33:50.497000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:33:50.639331 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 14:33:50.648323 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4
Dec 13 14:33:50.663358 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:33:50.729327 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1451)
Dec 13 14:33:50.933171 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:33:50.982929 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:33:50.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:50.985544 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:33:51.073073 lvm[1559]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:33:51.117620 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:33:51.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.118887 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:33:51.121734 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:33:51.127086 lvm[1560]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:33:51.159991 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:33:51.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.161418 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:33:51.162484 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:33:51.162529 systemd[1]: Reached target local-fs.target.
Dec 13 14:33:51.163940 systemd[1]: Reached target machines.target.
Dec 13 14:33:51.166856 systemd[1]: Starting ldconfig.service...
Dec 13 14:33:51.168954 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:33:51.169027 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:33:51.170860 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:33:51.173922 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:33:51.178871 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:33:51.185731 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:33:51.204735 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1562 (bootctl)
Dec 13 14:33:51.206833 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:33:51.246114 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:33:51.262117 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:33:51.262405 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:33:51.292813 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:33:51.294816 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:33:51.294000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.416052 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:33:51.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.416977 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:33:51.443317 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:33:51.466324 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:33:51.471962 systemd-fsck[1572]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:33:51.471962 systemd-fsck[1572]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 14:33:51.475547 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:33:51.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.480199 systemd[1]: Mounting boot.mount...
Dec 13 14:33:51.516636 systemd[1]: Mounted boot.mount.
Dec 13 14:33:51.519614 (sd-sysext)[1576]: Using extensions 'kubernetes'.
Dec 13 14:33:51.530394 (sd-sysext)[1576]: Merged extensions into '/usr'.
Dec 13 14:33:51.558018 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:33:51.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.559688 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:33:51.561489 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:33:51.562788 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:33:51.564265 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:33:51.568045 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:33:51.572757 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:33:51.573741 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:33:51.573925 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:33:51.574066 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:33:51.578858 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:33:51.580212 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:33:51.580578 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:33:51.583096 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:33:51.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.581000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.583534 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:33:51.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.584000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.586784 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:33:51.587195 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:33:51.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:51.589918 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:33:51.590386 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:33:51.593579 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:33:51.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:51.598188 systemd[1]: Starting ensure-sysext.service... Dec 13 14:33:51.606397 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 14:33:51.626840 systemd[1]: Reloading. Dec 13 14:33:51.662249 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 14:33:51.665098 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 14:33:51.675575 systemd-tmpfiles[1595]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 14:33:51.787075 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2024-12-13T14:33:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:33:51.787112 /usr/lib/systemd/system-generators/torcx-generator[1615]: time="2024-12-13T14:33:51Z" level=info msg="torcx already run" Dec 13 14:33:51.963457 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:33:51.963482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:33:51.991251 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Dec 13 14:33:52.065429 systemd-networkd[1453]: eth0: Gained IPv6LL Dec 13 14:33:52.072000 audit: BPF prog-id=30 op=LOAD Dec 13 14:33:52.072000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:33:52.073000 audit: BPF prog-id=31 op=LOAD Dec 13 14:33:52.073000 audit: BPF prog-id=32 op=LOAD Dec 13 14:33:52.073000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:33:52.073000 audit: BPF prog-id=29 op=UNLOAD Dec 13 14:33:52.076000 audit: BPF prog-id=33 op=LOAD Dec 13 14:33:52.076000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:33:52.076000 audit: BPF prog-id=34 op=LOAD Dec 13 14:33:52.076000 audit: BPF prog-id=35 op=LOAD Dec 13 14:33:52.076000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:33:52.076000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:33:52.079000 audit: BPF prog-id=36 op=LOAD Dec 13 14:33:52.079000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:33:52.079000 audit: BPF prog-id=37 op=LOAD Dec 13 14:33:52.079000 audit: BPF prog-id=38 op=LOAD Dec 13 14:33:52.079000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:33:52.079000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:33:52.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.084109 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:33:52.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.085667 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 14:33:52.092578 systemd[1]: Starting audit-rules.service... Dec 13 14:33:52.096904 systemd[1]: Starting clean-ca-certificates.service... Dec 13 14:33:52.102547 systemd[1]: Starting systemd-journal-catalog-update.service... 
Dec 13 14:33:52.108000 audit: BPF prog-id=39 op=LOAD Dec 13 14:33:52.115000 audit: BPF prog-id=40 op=LOAD Dec 13 14:33:52.112266 systemd[1]: Starting systemd-resolved.service... Dec 13 14:33:52.118866 systemd[1]: Starting systemd-timesyncd.service... Dec 13 14:33:52.124750 systemd[1]: Starting systemd-update-utmp.service... Dec 13 14:33:52.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.131693 systemd[1]: Finished clean-ca-certificates.service. Dec 13 14:33:52.138858 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:52.145000 audit[1674]: SYSTEM_BOOT pid=1674 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.156996 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.160893 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:52.165108 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:33:52.168831 systemd[1]: Starting modprobe@loop.service... Dec 13 14:33:52.169961 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.170189 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:52.170528 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Dec 13 14:33:52.172859 systemd[1]: Finished systemd-update-utmp.service. Dec 13 14:33:52.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.178335 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:52.178537 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:52.179000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.180422 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.180634 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.180783 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:52.181030 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:52.187794 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:33:52.188396 systemd[1]: Finished modprobe@loop.service. Dec 13 14:33:52.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:33:52.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.190361 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.193685 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:33:52.198163 systemd[1]: Starting modprobe@drm.service... Dec 13 14:33:52.199118 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.199377 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:52.199621 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 14:33:52.201066 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:33:52.201390 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:33:52.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.203585 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:33:52.207842 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:33:52.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.217180 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:33:52.217458 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:33:52.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.218740 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:33:52.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.219000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.218881 systemd[1]: Finished modprobe@drm.service. Dec 13 14:33:52.220401 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.247250 ldconfig[1561]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 14:33:52.268446 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Dec 13 14:33:52.268473 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 14:33:52.276101 systemd[1]: Finished ldconfig.service. Dec 13 14:33:52.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.309072 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 14:33:52.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.312211 systemd[1]: Starting systemd-update-done.service... Dec 13 14:33:52.330390 systemd[1]: Finished systemd-update-done.service. Dec 13 14:33:52.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:33:52.345282 systemd-resolved[1672]: Positive Trust Anchors: Dec 13 14:33:52.345336 systemd-resolved[1672]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:33:52.345378 systemd-resolved[1672]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 14:33:52.346000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 14:33:52.346000 audit[1694]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffa302ace0 a2=420 a3=0 items=0 ppid=1668 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:33:52.346000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 14:33:52.347037 augenrules[1694]: No rules Dec 13 14:33:52.348146 systemd[1]: Finished audit-rules.service. Dec 13 14:33:52.358214 systemd[1]: Started systemd-timesyncd.service. Dec 13 14:33:52.359342 systemd[1]: Reached target time-set.target. Dec 13 14:33:52.378717 systemd-resolved[1672]: Defaulting to hostname 'linux'. Dec 13 14:33:52.380639 systemd[1]: Started systemd-resolved.service. Dec 13 14:33:52.381724 systemd[1]: Reached target network.target. Dec 13 14:33:52.382547 systemd[1]: Reached target network-online.target. Dec 13 14:33:52.383529 systemd[1]: Reached target nss-lookup.target. Dec 13 14:33:52.384375 systemd[1]: Reached target sysinit.target. Dec 13 14:33:52.385364 systemd[1]: Started motdgen.path.
Dec 13 14:33:52.386115 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 14:33:52.387416 systemd[1]: Started logrotate.timer. Dec 13 14:33:52.388259 systemd[1]: Started mdadm.timer. Dec 13 14:33:52.388952 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 14:33:52.389813 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 14:33:52.389853 systemd[1]: Reached target paths.target. Dec 13 14:33:52.390611 systemd[1]: Reached target timers.target. Dec 13 14:33:52.391791 systemd[1]: Listening on dbus.socket. Dec 13 14:33:52.401198 systemd[1]: Starting docker.socket... Dec 13 14:33:52.410723 systemd[1]: Listening on sshd.socket. Dec 13 14:33:52.411757 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:52.414615 systemd[1]: Listening on docker.socket. Dec 13 14:33:52.415761 systemd[1]: Reached target sockets.target. Dec 13 14:33:52.417280 systemd[1]: Reached target basic.target. Dec 13 14:33:52.418209 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.418242 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 14:33:52.419688 systemd[1]: Started amazon-ssm-agent.service. Dec 13 14:33:52.422257 systemd[1]: Starting containerd.service... Dec 13 14:33:52.426701 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 14:33:52.438331 systemd[1]: Starting dbus.service... Dec 13 14:33:52.468771 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 14:33:52.478626 systemd[1]: Starting extend-filesystems.service... 
Dec 13 14:33:52.480251 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 14:33:52.485105 systemd[1]: Starting kubelet.service... Dec 13 14:33:52.500290 systemd[1]: Starting motdgen.service... Dec 13 14:33:52.506928 systemd[1]: Started nvidia.service. Dec 13 14:33:52.513342 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 14:33:52.517426 systemd[1]: Starting sshd-keygen.service... Dec 13 14:33:52.526565 systemd[1]: Starting systemd-logind.service... Dec 13 14:33:52.527657 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 14:33:52.527735 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 14:33:52.533119 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 14:33:52.535432 systemd[1]: Starting update-engine.service... Dec 13 14:33:52.545843 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 14:33:52.625902 jq[1716]: true Dec 13 14:33:52.623914 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 14:33:52.629358 jq[1706]: false Dec 13 14:33:52.624154 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 14:33:52.644581 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 14:33:52.644922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 14:33:53.561536 systemd-timesyncd[1673]: Contacted time server 23.111.186.186:123 (0.flatcar.pool.ntp.org). Dec 13 14:33:53.561627 systemd-timesyncd[1673]: Initial clock synchronization to Fri 2024-12-13 14:33:53.561334 UTC. Dec 13 14:33:53.561697 systemd-resolved[1672]: Clock change detected. Flushing caches.
Dec 13 14:33:53.587670 extend-filesystems[1708]: Found loop1 Dec 13 14:33:53.589481 extend-filesystems[1708]: Found nvme0n1 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p1 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p2 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p3 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found usr Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p4 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p6 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p7 Dec 13 14:33:53.590383 extend-filesystems[1708]: Found nvme0n1p9 Dec 13 14:33:53.590383 extend-filesystems[1708]: Checking size of /dev/nvme0n1p9 Dec 13 14:33:53.595183 dbus-daemon[1705]: [system] SELinux support is enabled Dec 13 14:33:53.595820 systemd[1]: Started dbus.service. Dec 13 14:33:53.598272 dbus-daemon[1705]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1453 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 14:33:53.601206 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 14:33:53.602162 dbus-daemon[1705]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 14:33:53.601243 systemd[1]: Reached target system-config.target. Dec 13 14:33:53.602572 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 14:33:53.602599 systemd[1]: Reached target user-config.target. Dec 13 14:33:53.614788 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:33:53.621267 extend-filesystems[1708]: Resized partition /dev/nvme0n1p9 Dec 13 14:33:53.628803 extend-filesystems[1749]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:33:53.636791 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 14:33:53.637065 systemd[1]: Finished motdgen.service. Dec 13 14:33:53.641040 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:33:53.641284 jq[1730]: true Dec 13 14:33:53.711025 amazon-ssm-agent[1702]: 2024/12/13 14:33:53 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:33:53.746513 update_engine[1715]: I1213 14:33:53.745596 1715 main.cc:92] Flatcar Update Engine starting Dec 13 14:33:53.752729 systemd[1]: Started update-engine.service. Dec 13 14:33:53.758049 update_engine[1715]: I1213 14:33:53.752806 1715 update_check_scheduler.cc:74] Next update check in 6m42s Dec 13 14:33:53.758164 amazon-ssm-agent[1702]: Initializing new seelog logger Dec 13 14:33:53.756449 systemd[1]: Started locksmithd.service. Dec 13 14:33:53.758384 amazon-ssm-agent[1702]: New Seelog Logger Creation Complete Dec 13 14:33:53.758495 amazon-ssm-agent[1702]: 2024/12/13 14:33:53 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:33:53.758495 amazon-ssm-agent[1702]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:33:53.759596 amazon-ssm-agent[1702]: 2024/12/13 14:33:53 processing appconfig overrides Dec 13 14:33:53.818406 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 14:33:53.842422 extend-filesystems[1749]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 14:33:53.842422 extend-filesystems[1749]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 14:33:53.842422 extend-filesystems[1749]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. 
Dec 13 14:33:53.848418 extend-filesystems[1708]: Resized filesystem in /dev/nvme0n1p9 Dec 13 14:33:53.843463 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:33:53.843680 systemd[1]: Finished extend-filesystems.service. Dec 13 14:33:53.852663 bash[1778]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:33:53.854293 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:33:53.904468 env[1718]: time="2024-12-13T14:33:53.904393257Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:33:53.967759 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:33:53.981152 systemd-logind[1714]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 14:33:53.981189 systemd-logind[1714]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 14:33:53.981214 systemd-logind[1714]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 14:33:53.984316 systemd-logind[1714]: New seat seat0. Dec 13 14:33:53.993076 systemd[1]: Started systemd-logind.service. Dec 13 14:33:54.018234 dbus-daemon[1705]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 14:33:54.018561 systemd[1]: Started systemd-hostnamed.service. Dec 13 14:33:54.023713 dbus-daemon[1705]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1745 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 14:33:54.033326 systemd[1]: Starting polkit.service... 
Dec 13 14:33:54.063159 polkitd[1797]: Started polkitd version 121 Dec 13 14:33:54.085983 polkitd[1797]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 14:33:54.086071 polkitd[1797]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 14:33:54.115002 polkitd[1797]: Finished loading, compiling and executing 2 rules Dec 13 14:33:54.115841 dbus-daemon[1705]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 14:33:54.116275 systemd[1]: Started polkit.service. Dec 13 14:33:54.124963 polkitd[1797]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 14:33:54.160100 env[1718]: time="2024-12-13T14:33:54.159426492Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:33:54.160246 env[1718]: time="2024-12-13T14:33:54.160098518Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.171170 systemd-hostnamed[1745]: Hostname set to (transient) Dec 13 14:33:54.171574 systemd-resolved[1672]: System hostname changed to 'ip-172-31-23-152'. Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175314719Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175375547Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175714334Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175738305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175758674Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175781340Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.175991074Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.176299522Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.176503443Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:33:54.176622 env[1718]: time="2024-12-13T14:33:54.176527399Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:33:54.177164 env[1718]: time="2024-12-13T14:33:54.176588393Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:33:54.177164 env[1718]: time="2024-12-13T14:33:54.176605308Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.198571113Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.198737664Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.198810400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199071177Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199101521Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199164491Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199186373Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199246155Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199267746Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199327172Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199348487Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199406803Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.199745094Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 14:33:54.200517 env[1718]: time="2024-12-13T14:33:54.200073329Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201014669Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201098811Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201141304Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201293242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201316589Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201538837Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201577043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201626276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201663920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201709989Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201745429Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.201828 env[1718]: time="2024-12-13T14:33:54.201806590Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202232356Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202300707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202324397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202402574Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202673204Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202746922Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202775656Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:33:54.203000 env[1718]: time="2024-12-13T14:33:54.202892324Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 14:33:54.203478 env[1718]: time="2024-12-13T14:33:54.203285329Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:33:54.203478 env[1718]: time="2024-12-13T14:33:54.203376222Z" level=info msg="Connect containerd service" Dec 13 14:33:54.203478 env[1718]: time="2024-12-13T14:33:54.203450704Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:33:54.216989 env[1718]: time="2024-12-13T14:33:54.216293637Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:33:54.216989 env[1718]: time="2024-12-13T14:33:54.216696941Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:33:54.216989 env[1718]: time="2024-12-13T14:33:54.216751574Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:33:54.216945 systemd[1]: Started containerd.service.
Dec 13 14:33:54.218276 env[1718]: time="2024-12-13T14:33:54.217074141Z" level=info msg="containerd successfully booted in 0.387962s"
Dec 13 14:33:54.229286 env[1718]: time="2024-12-13T14:33:54.221024876Z" level=info msg="Start subscribing containerd event"
Dec 13 14:33:54.249028 env[1718]: time="2024-12-13T14:33:54.248970627Z" level=info msg="Start recovering state"
Dec 13 14:33:54.249171 env[1718]: time="2024-12-13T14:33:54.249120442Z" level=info msg="Start event monitor"
Dec 13 14:33:54.249171 env[1718]: time="2024-12-13T14:33:54.249146479Z" level=info msg="Start snapshots syncer"
Dec 13 14:33:54.249320 env[1718]: time="2024-12-13T14:33:54.249162193Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:33:54.249320 env[1718]: time="2024-12-13T14:33:54.249298013Z" level=info msg="Start streaming server"
Dec 13 14:33:54.614802 coreos-metadata[1704]: Dec 13 14:33:54.604 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:33:54.619851 coreos-metadata[1704]: Dec 13 14:33:54.619 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 14:33:54.632932 coreos-metadata[1704]: Dec 13 14:33:54.632 INFO Fetch successful
Dec 13 14:33:54.632932 coreos-metadata[1704]: Dec 13 14:33:54.632 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:33:54.637964 coreos-metadata[1704]: Dec 13 14:33:54.637 INFO Fetch successful
Dec 13 14:33:54.657688 unknown[1704]: wrote ssh authorized keys file for user: core
Dec 13 14:33:54.676315 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Create new startup processor
Dec 13 14:33:54.683333 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 14:33:54.683504 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing bookkeeping folders
Dec 13 14:33:54.685532 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO removing the completed state files
Dec 13 14:33:54.685665 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing bookkeeping folders for long running plugins
Dec 13 14:33:54.685742 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 14:33:54.685808 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing healthcheck folders for long running plugins
Dec 13 14:33:54.685967 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing locations for inventory plugin
Dec 13 14:33:54.686065 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing default location for custom inventory
Dec 13 14:33:54.686131 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing default location for file inventory
Dec 13 14:33:54.686195 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Initializing default location for role inventory
Dec 13 14:33:54.686258 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Init the cloudwatchlogs publisher
Dec 13 14:33:54.686325 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 14:33:54.686410 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 14:33:54.686475 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 14:33:54.686543 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 14:33:54.686605 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 14:33:54.686663 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform independent plugin aws:runDocument
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 14:33:54.689277 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO OS: linux, Arch: amd64
Dec 13 14:33:54.702352 update-ssh-keys[1884]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:33:54.699087 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:33:54.715908 amazon-ssm-agent[1702]: datastore file /var/lib/amazon/ssm/i-0c7841024e592caba/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 14:33:54.778268 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 14:33:54.872726 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 14:33:54.971794 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 14:33:55.037549 locksmithd[1774]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:33:55.066341 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0c7841024e592caba, requestId: 19e62d10-926b-470b-8512-b57c734a76f8
Dec 13 14:33:55.163897 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 14:33:55.258225 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 14:33:55.353489 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 14:33:55.448855 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] Starting message polling
Dec 13 14:33:55.544485 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 14:33:55.640993 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [instanceID=i-0c7841024e592caba] Starting association polling
Dec 13 14:33:55.737425 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 14:33:55.833537 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 14:33:55.929853 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 14:33:55.957502 systemd[1]: Started kubelet.service.
Dec 13 14:33:56.027560 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 14:33:56.125187 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 14:33:56.222458 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] listening reply.
Dec 13 14:33:56.319677 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:33:56.416848 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [OfflineService] Starting document processing engine...
Dec 13 14:33:56.514826 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:33:56.550990 sshd_keygen[1727]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:33:56.588608 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:33:56.591568 systemd[1]: Starting issuegen.service...
Dec 13 14:33:56.601559 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:33:56.601787 systemd[1]: Finished issuegen.service.
Dec 13 14:33:56.604572 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:33:56.613804 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:33:56.632470 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:33:56.636256 systemd[1]: Started getty@tty1.service.
Dec 13 14:33:56.639043 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:33:56.640500 systemd[1]: Reached target getty.target.
Dec 13 14:33:56.641737 systemd[1]: Reached target multi-user.target.
Dec 13 14:33:56.645097 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:33:56.664840 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:33:56.665399 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:33:56.667102 systemd[1]: Startup finished in 703ms (kernel) + 8.337s (initrd) + 11.702s (userspace) = 20.743s.
Dec 13 14:33:56.711712 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [OfflineService] Starting message polling
Dec 13 14:33:56.809774 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:33:56.909275 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:33:57.007750 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:33:57.082699 kubelet[1902]: E1213 14:33:57.082632 1902 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:57.086160 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:57.086341 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:57.086656 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
Dec 13 14:33:57.106390 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:33:57.205366 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:33:57.304412 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:33:57.403697 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:33:57.503334 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:33:57.602817 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c7841024e592caba?role=subscribe&stream=input
Dec 13 14:33:57.702672 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0c7841024e592caba?role=subscribe&stream=input
Dec 13 14:33:57.802813 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:33:57.903026 amazon-ssm-agent[1702]: 2024-12-13 14:33:54 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:34:01.891206 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:34:01.896389 systemd[1]: Started sshd@0-172.31.23.152:22-139.178.89.65:50760.service.
Dec 13 14:34:02.193415 sshd[1923]: Accepted publickey for core from 139.178.89.65 port 50760 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:02.203908 sshd[1923]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:02.261397 systemd[1]: Created slice user-500.slice.
Dec 13 14:34:02.272298 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:34:02.306949 systemd-logind[1714]: New session 1 of user core.
Dec 13 14:34:02.325970 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:34:02.328703 systemd[1]: Starting user@500.service...
Dec 13 14:34:02.350171 (systemd)[1926]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:02.598142 systemd[1926]: Queued start job for default target default.target.
Dec 13 14:34:02.599052 systemd[1926]: Reached target paths.target.
Dec 13 14:34:02.599088 systemd[1926]: Reached target sockets.target.
Dec 13 14:34:02.599107 systemd[1926]: Reached target timers.target.
Dec 13 14:34:02.599125 systemd[1926]: Reached target basic.target.
Dec 13 14:34:02.599273 systemd[1]: Started user@500.service.
Dec 13 14:34:02.602130 systemd[1]: Started session-1.scope.
Dec 13 14:34:02.603010 systemd[1926]: Reached target default.target.
Dec 13 14:34:02.603253 systemd[1926]: Startup finished in 227ms.
Dec 13 14:34:02.841094 systemd[1]: Started sshd@1-172.31.23.152:22-139.178.89.65:50762.service.
Dec 13 14:34:03.038662 sshd[1935]: Accepted publickey for core from 139.178.89.65 port 50762 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:03.042063 sshd[1935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:03.124857 systemd-logind[1714]: New session 2 of user core.
Dec 13 14:34:03.126244 systemd[1]: Started session-2.scope.
Dec 13 14:34:03.437351 sshd[1935]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:03.446152 systemd[1]: sshd@1-172.31.23.152:22-139.178.89.65:50762.service: Deactivated successfully.
Dec 13 14:34:03.456847 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:34:03.461337 systemd-logind[1714]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:34:03.481941 systemd[1]: Started sshd@2-172.31.23.152:22-139.178.89.65:50778.service.
Dec 13 14:34:03.483320 systemd-logind[1714]: Removed session 2.
Dec 13 14:34:03.695355 sshd[1941]: Accepted publickey for core from 139.178.89.65 port 50778 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:03.697372 sshd[1941]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:03.745613 systemd-logind[1714]: New session 3 of user core.
Dec 13 14:34:03.746315 systemd[1]: Started session-3.scope.
Dec 13 14:34:04.082815 sshd[1941]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:04.110209 systemd[1]: sshd@2-172.31.23.152:22-139.178.89.65:50778.service: Deactivated successfully.
Dec 13 14:34:04.116863 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:34:04.120840 systemd-logind[1714]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:34:04.129404 systemd[1]: Started sshd@3-172.31.23.152:22-139.178.89.65:50788.service.
Dec 13 14:34:04.131499 systemd-logind[1714]: Removed session 3.
Dec 13 14:34:04.330864 sshd[1947]: Accepted publickey for core from 139.178.89.65 port 50788 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:04.332467 sshd[1947]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:04.348113 systemd[1]: Started session-4.scope.
Dec 13 14:34:04.348936 systemd-logind[1714]: New session 4 of user core.
Dec 13 14:34:04.497638 sshd[1947]: pam_unix(sshd:session): session closed for user core
Dec 13 14:34:04.509065 systemd-logind[1714]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:34:04.509314 systemd[1]: sshd@3-172.31.23.152:22-139.178.89.65:50788.service: Deactivated successfully.
Dec 13 14:34:04.510292 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:34:04.520920 systemd-logind[1714]: Removed session 4.
Dec 13 14:34:04.531788 systemd[1]: Started sshd@4-172.31.23.152:22-139.178.89.65:50802.service.
Dec 13 14:34:04.709015 sshd[1953]: Accepted publickey for core from 139.178.89.65 port 50802 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:34:04.710195 sshd[1953]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:34:04.733184 systemd-logind[1714]: New session 5 of user core.
Dec 13 14:34:04.733908 systemd[1]: Started session-5.scope.
Dec 13 14:34:04.891125 sudo[1956]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:34:04.891474 sudo[1956]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:34:04.910355 systemd[1]: Starting coreos-metadata.service...
Dec 13 14:34:05.010165 coreos-metadata[1960]: Dec 13 14:34:05.009 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:34:05.011425 coreos-metadata[1960]: Dec 13 14:34:05.011 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1
Dec 13 14:34:05.012579 coreos-metadata[1960]: Dec 13 14:34:05.012 INFO Fetch successful
Dec 13 14:34:05.012579 coreos-metadata[1960]: Dec 13 14:34:05.012 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1
Dec 13 14:34:05.013290 coreos-metadata[1960]: Dec 13 14:34:05.013 INFO Fetch successful
Dec 13 14:34:05.013290 coreos-metadata[1960]: Dec 13 14:34:05.013 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1
Dec 13 14:34:05.020417 coreos-metadata[1960]: Dec 13 14:34:05.020 INFO Fetch successful
Dec 13 14:34:05.020679 coreos-metadata[1960]: Dec 13 14:34:05.020 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1
Dec 13 14:34:05.021307 coreos-metadata[1960]: Dec 13 14:34:05.021 INFO Fetch successful
Dec 13 14:34:05.021439 coreos-metadata[1960]: Dec 13 14:34:05.021 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1
Dec 13 14:34:05.022499 coreos-metadata[1960]: Dec 13 14:34:05.022 INFO Fetch successful
Dec 13 14:34:05.022627 coreos-metadata[1960]: Dec 13 14:34:05.022 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1
Dec 13 14:34:05.023508 coreos-metadata[1960]: Dec 13 14:34:05.023 INFO Fetch successful
Dec 13 14:34:05.023508 coreos-metadata[1960]: Dec 13 14:34:05.023 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1
Dec 13 14:34:05.024513 coreos-metadata[1960]: Dec 13 14:34:05.024 INFO Fetch successful
Dec 13 14:34:05.024607 coreos-metadata[1960]: Dec 13 14:34:05.024 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1
Dec 13 14:34:05.025478 coreos-metadata[1960]: Dec 13 14:34:05.025 INFO Fetch successful
Dec 13 14:34:05.035284 systemd[1]: Finished coreos-metadata.service.
Dec 13 14:34:05.041861 amazon-ssm-agent[1702]: 2024-12-13 14:34:05 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:34:06.499256 systemd[1]: Stopped kubelet.service.
Dec 13 14:34:06.499603 systemd[1]: kubelet.service: Consumed 1.222s CPU time.
Dec 13 14:34:06.518938 systemd[1]: Starting kubelet.service...
Dec 13 14:34:06.628437 systemd[1]: Reloading.
Dec 13 14:34:06.811718 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2024-12-13T14:34:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:34:06.811759 /usr/lib/systemd/system-generators/torcx-generator[2016]: time="2024-12-13T14:34:06Z" level=info msg="torcx already run"
Dec 13 14:34:07.030090 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:34:07.030113 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:34:07.078367 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:34:07.366037 systemd[1]: Started kubelet.service.
Dec 13 14:34:07.370761 systemd[1]: Stopping kubelet.service...
Dec 13 14:34:07.375668 systemd[1]: kubelet.service: Deactivated successfully.
Dec 13 14:34:07.379273 systemd[1]: Stopped kubelet.service.
Dec 13 14:34:07.381720 systemd[1]: Starting kubelet.service...
Dec 13 14:34:07.605795 systemd[1]: Started kubelet.service.
Dec 13 14:34:07.713469 kubelet[2073]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:34:07.713469 kubelet[2073]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:34:07.713469 kubelet[2073]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:34:07.715662 kubelet[2073]: I1213 14:34:07.715609 2073 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:34:08.349857 kubelet[2073]: I1213 14:34:08.349797 2073 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:34:08.349857 kubelet[2073]: I1213 14:34:08.349838 2073 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:34:08.350383 kubelet[2073]: I1213 14:34:08.350350 2073 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:34:08.404082 kubelet[2073]: I1213 14:34:08.404039 2073 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:34:08.437162 kubelet[2073]: E1213 14:34:08.437040 2073 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:34:08.437162 kubelet[2073]: I1213 14:34:08.437167 2073 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:34:08.454568 kubelet[2073]: I1213 14:34:08.454534 2073 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:34:08.459642 kubelet[2073]: I1213 14:34:08.459604 2073 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:34:08.460980 kubelet[2073]: I1213 14:34:08.460928 2073 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:34:08.461209 kubelet[2073]: I1213 14:34:08.460978 2073 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172.31.23.152","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:34:08.461339 kubelet[2073]: I1213 14:34:08.461216 2073 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:34:08.461339 kubelet[2073]: I1213 14:34:08.461232 2073 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:34:08.461432 kubelet[2073]: I1213 14:34:08.461385 2073 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:34:08.482363 kubelet[2073]: I1213 14:34:08.482318 2073 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:34:08.482559 kubelet[2073]: I1213 14:34:08.482375 2073 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:34:08.482559 kubelet[2073]: I1213 14:34:08.482427 2073 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:34:08.482559 kubelet[2073]: I1213 14:34:08.482446 2073 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:34:08.488578 kubelet[2073]: E1213 14:34:08.488527 2073 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:08.488894 kubelet[2073]: E1213 14:34:08.488593 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:08.505451 kubelet[2073]: W1213 14:34:08.505399 2073 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "172.31.23.152" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Dec 13 14:34:08.505624 kubelet[2073]: E1213 14:34:08.505461 2073 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"172.31.23.152\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 14:34:08.505624 kubelet[2073]: W1213 14:34:08.505614 2073 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Dec 13 14:34:08.505944 kubelet[2073]: E1213 14:34:08.505635 2073 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Dec 13 14:34:08.512608 kubelet[2073]: I1213 14:34:08.512574 2073 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:34:08.523482 kubelet[2073]: I1213 14:34:08.523433 2073 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:34:08.523661 kubelet[2073]: W1213 14:34:08.523537 2073 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:34:08.525371 kubelet[2073]: I1213 14:34:08.525344 2073 server.go:1269] "Started kubelet"
Dec 13 14:34:08.532692 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:34:08.532889 kubelet[2073]: I1213 14:34:08.532591 2073 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:34:08.542019 kubelet[2073]: I1213 14:34:08.539317 2073 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:34:08.546264 kubelet[2073]: I1213 14:34:08.546036 2073 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:34:08.552252 kubelet[2073]: I1213 14:34:08.552179 2073 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:34:08.554708 kubelet[2073]: I1213 14:34:08.554677 2073 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:34:08.555362 kubelet[2073]: I1213 14:34:08.555340 2073 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:34:08.562997 kubelet[2073]: I1213 14:34:08.561754 2073 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:34:08.562997 kubelet[2073]: E1213 14:34:08.561992 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:08.564932 kubelet[2073]: I1213 14:34:08.564906 2073 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:34:08.566723 kubelet[2073]: I1213 14:34:08.565032 2073 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:34:08.570636 kubelet[2073]: I1213 14:34:08.570405 2073 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:34:08.570636 kubelet[2073]: I1213 14:34:08.570601 2073 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:34:08.583104 kubelet[2073]: E1213 14:34:08.583009 2073 kubelet.go:1478] "Image garbage 
collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:34:08.587602 kubelet[2073]: I1213 14:34:08.587563 2073 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:34:08.621138 kubelet[2073]: E1213 14:34:08.620999 2073 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.23.152\" not found" node="172.31.23.152" Dec 13 14:34:08.626415 kubelet[2073]: I1213 14:34:08.626315 2073 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:34:08.626649 kubelet[2073]: I1213 14:34:08.626618 2073 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:34:08.626649 kubelet[2073]: I1213 14:34:08.626652 2073 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:34:08.630349 kubelet[2073]: I1213 14:34:08.630303 2073 policy_none.go:49] "None policy: Start" Dec 13 14:34:08.633179 kubelet[2073]: I1213 14:34:08.631651 2073 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:34:08.633179 kubelet[2073]: I1213 14:34:08.631698 2073 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:34:08.643569 systemd[1]: Created slice kubepods.slice. Dec 13 14:34:08.667273 systemd[1]: Created slice kubepods-burstable.slice. Dec 13 14:34:08.669628 kubelet[2073]: E1213 14:34:08.668610 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:08.673693 systemd[1]: Created slice kubepods-besteffort.slice. 
Dec 13 14:34:08.682314 kubelet[2073]: I1213 14:34:08.682281 2073 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:34:08.682539 kubelet[2073]: I1213 14:34:08.682524 2073 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:34:08.682604 kubelet[2073]: I1213 14:34:08.682542 2073 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:34:08.683252 kubelet[2073]: I1213 14:34:08.683236 2073 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:34:08.687495 kubelet[2073]: E1213 14:34:08.687469 2073 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.23.152\" not found" Dec 13 14:34:08.790835 kubelet[2073]: I1213 14:34:08.790805 2073 kubelet_node_status.go:72] "Attempting to register node" node="172.31.23.152" Dec 13 14:34:08.796106 kubelet[2073]: I1213 14:34:08.796074 2073 kubelet_node_status.go:75] "Successfully registered node" node="172.31.23.152" Dec 13 14:34:08.796328 kubelet[2073]: E1213 14:34:08.796314 2073 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172.31.23.152\": node \"172.31.23.152\" not found" Dec 13 14:34:08.805021 kubelet[2073]: I1213 14:34:08.804967 2073 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:34:08.807616 kubelet[2073]: I1213 14:34:08.807572 2073 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:34:08.807616 kubelet[2073]: I1213 14:34:08.807614 2073 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:34:08.807891 kubelet[2073]: I1213 14:34:08.807688 2073 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:34:08.807891 kubelet[2073]: E1213 14:34:08.807763 2073 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 14:34:08.823273 kubelet[2073]: E1213 14:34:08.823225 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:08.924428 kubelet[2073]: E1213 14:34:08.924270 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.025432 kubelet[2073]: E1213 14:34:09.025381 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.126093 kubelet[2073]: E1213 14:34:09.126039 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.176405 sudo[1956]: pam_unix(sudo:session): session closed for user root Dec 13 14:34:09.201643 sshd[1953]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:09.207382 systemd[1]: sshd@4-172.31.23.152:22-139.178.89.65:50802.service: Deactivated successfully. Dec 13 14:34:09.209167 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:34:09.211373 systemd-logind[1714]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:34:09.213920 systemd-logind[1714]: Removed session 5. 
Dec 13 14:34:09.226811 kubelet[2073]: E1213 14:34:09.226747 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.327440 kubelet[2073]: E1213 14:34:09.327388 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.354716 kubelet[2073]: I1213 14:34:09.354666 2073 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 14:34:09.354959 kubelet[2073]: W1213 14:34:09.354937 2073 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:34:09.355051 kubelet[2073]: W1213 14:34:09.354951 2073 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 14:34:09.428131 kubelet[2073]: E1213 14:34:09.427981 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.489745 kubelet[2073]: E1213 14:34:09.489666 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:09.528497 kubelet[2073]: E1213 14:34:09.528434 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.629152 kubelet[2073]: E1213 14:34:09.629056 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.730246 kubelet[2073]: E1213 14:34:09.730114 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node 
\"172.31.23.152\" not found" Dec 13 14:34:09.830926 kubelet[2073]: E1213 14:34:09.830780 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:09.931948 kubelet[2073]: E1213 14:34:09.931842 2073 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172.31.23.152\" not found" Dec 13 14:34:10.033512 kubelet[2073]: I1213 14:34:10.033484 2073 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 14:34:10.034132 env[1718]: time="2024-12-13T14:34:10.033966604Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 14:34:10.034559 kubelet[2073]: I1213 14:34:10.034345 2073 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 14:34:10.489642 kubelet[2073]: I1213 14:34:10.489465 2073 apiserver.go:52] "Watching apiserver" Dec 13 14:34:10.490001 kubelet[2073]: E1213 14:34:10.489908 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:10.515521 systemd[1]: Created slice kubepods-burstable-pod9536e26b_de09_4314_ae82_cb9537a031ba.slice. Dec 13 14:34:10.529200 systemd[1]: Created slice kubepods-besteffort-podf0a1ebcf_17fc_4925_a00a_1fce9eb7e57b.slice. 
Dec 13 14:34:10.566020 kubelet[2073]: I1213 14:34:10.565970 2073 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:34:10.579980 kubelet[2073]: I1213 14:34:10.579857 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-etc-cni-netd\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580045 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-config-path\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580077 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-hubble-tls\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580106 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b-lib-modules\") pod \"kube-proxy-srm92\" (UID: \"f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b\") " pod="kube-system/kube-proxy-srm92" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580128 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-bpf-maps\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " 
pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580149 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-lib-modules\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580193 kubelet[2073]: I1213 14:34:10.580173 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bclx6\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-kube-api-access-bclx6\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580196 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b-kube-proxy\") pod \"kube-proxy-srm92\" (UID: \"f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b\") " pod="kube-system/kube-proxy-srm92" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580219 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-hostproc\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580247 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cni-path\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580270 2073 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9536e26b-de09-4314-ae82-cb9537a031ba-clustermesh-secrets\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580296 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-kernel\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580471 kubelet[2073]: I1213 14:34:10.580319 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b-xtables-lock\") pod \"kube-proxy-srm92\" (UID: \"f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b\") " pod="kube-system/kube-proxy-srm92" Dec 13 14:34:10.580981 kubelet[2073]: I1213 14:34:10.580344 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frxvb\" (UniqueName: \"kubernetes.io/projected/f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b-kube-api-access-frxvb\") pod \"kube-proxy-srm92\" (UID: \"f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b\") " pod="kube-system/kube-proxy-srm92" Dec 13 14:34:10.580981 kubelet[2073]: I1213 14:34:10.580371 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-xtables-lock\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580981 kubelet[2073]: I1213 14:34:10.580394 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-net\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580981 kubelet[2073]: I1213 14:34:10.580426 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-run\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.580981 kubelet[2073]: I1213 14:34:10.580448 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-cgroup\") pod \"cilium-kd5cg\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") " pod="kube-system/cilium-kd5cg" Dec 13 14:34:10.681517 kubelet[2073]: I1213 14:34:10.681458 2073 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Dec 13 14:34:10.828374 env[1718]: time="2024-12-13T14:34:10.828319062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kd5cg,Uid:9536e26b-de09-4314-ae82-cb9537a031ba,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:10.846230 env[1718]: time="2024-12-13T14:34:10.846173393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srm92,Uid:f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b,Namespace:kube-system,Attempt:0,}" Dec 13 14:34:11.445123 env[1718]: time="2024-12-13T14:34:11.444813734Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.446658 env[1718]: time="2024-12-13T14:34:11.446606619Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.450632 env[1718]: time="2024-12-13T14:34:11.450582940Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.454079 env[1718]: time="2024-12-13T14:34:11.454029652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.454778 env[1718]: time="2024-12-13T14:34:11.454739304Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.458079 env[1718]: time="2024-12-13T14:34:11.458031532Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.459002 env[1718]: time="2024-12-13T14:34:11.458963559Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.459946 env[1718]: time="2024-12-13T14:34:11.459913566Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:11.490635 kubelet[2073]: E1213 14:34:11.490570 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:11.548780 env[1718]: time="2024-12-13T14:34:11.548673699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:11.548780 env[1718]: time="2024-12-13T14:34:11.548725729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:11.549196 env[1718]: time="2024-12-13T14:34:11.548742460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:11.549196 env[1718]: time="2024-12-13T14:34:11.549084501Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0e0bbf505c8320cc2dc435f3a32ec1329b92a05d878188988d6bee2fe6671964 pid=2135 runtime=io.containerd.runc.v2 Dec 13 14:34:11.549423 env[1718]: time="2024-12-13T14:34:11.549353576Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:34:11.549515 env[1718]: time="2024-12-13T14:34:11.549439628Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:34:11.549515 env[1718]: time="2024-12-13T14:34:11.549473865Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:34:11.549721 env[1718]: time="2024-12-13T14:34:11.549639352Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858 pid=2134 runtime=io.containerd.runc.v2 Dec 13 14:34:11.644296 systemd[1]: Started cri-containerd-ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858.scope. Dec 13 14:34:11.726362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2112730454.mount: Deactivated successfully. Dec 13 14:34:11.764076 systemd[1]: Started cri-containerd-0e0bbf505c8320cc2dc435f3a32ec1329b92a05d878188988d6bee2fe6671964.scope. 
Dec 13 14:34:11.836145 env[1718]: time="2024-12-13T14:34:11.836070269Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kd5cg,Uid:9536e26b-de09-4314-ae82-cb9537a031ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\"" Dec 13 14:34:11.842514 env[1718]: time="2024-12-13T14:34:11.842462336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 14:34:11.850450 env[1718]: time="2024-12-13T14:34:11.850395860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-srm92,Uid:f0a1ebcf-17fc-4925-a00a-1fce9eb7e57b,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0bbf505c8320cc2dc435f3a32ec1329b92a05d878188988d6bee2fe6671964\"" Dec 13 14:34:12.491038 kubelet[2073]: E1213 14:34:12.490972 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:13.491912 kubelet[2073]: E1213 14:34:13.491814 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:14.492204 kubelet[2073]: E1213 14:34:14.492132 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:15.493301 kubelet[2073]: E1213 14:34:15.493208 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:16.493987 kubelet[2073]: E1213 14:34:16.493884 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:17.494868 kubelet[2073]: E1213 14:34:17.494822 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:18.495146 kubelet[2073]: E1213 14:34:18.495047 2073 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:19.495634 kubelet[2073]: E1213 14:34:19.495547 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:19.834833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3884767420.mount: Deactivated successfully. Dec 13 14:34:20.496699 kubelet[2073]: E1213 14:34:20.496618 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:21.497100 kubelet[2073]: E1213 14:34:21.497032 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:22.497641 kubelet[2073]: E1213 14:34:22.497551 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:23.498775 kubelet[2073]: E1213 14:34:23.498571 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:24.189101 env[1718]: time="2024-12-13T14:34:24.188993509Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:24.192606 env[1718]: time="2024-12-13T14:34:24.192547449Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:34:24.194553 env[1718]: time="2024-12-13T14:34:24.194508194Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:34:24.195198 env[1718]: time="2024-12-13T14:34:24.195159346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 14:34:24.199329 env[1718]: time="2024-12-13T14:34:24.199292670Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\"" Dec 13 14:34:24.204199 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 14:34:24.209984 env[1718]: time="2024-12-13T14:34:24.209922315Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:34:24.232995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2082160980.mount: Deactivated successfully. Dec 13 14:34:24.242419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1202862734.mount: Deactivated successfully. Dec 13 14:34:24.251273 env[1718]: time="2024-12-13T14:34:24.251220250Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\"" Dec 13 14:34:24.252448 env[1718]: time="2024-12-13T14:34:24.252388279Z" level=info msg="StartContainer for \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\"" Dec 13 14:34:24.280636 systemd[1]: Started cri-containerd-f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348.scope. 
Dec 13 14:34:24.329633 env[1718]: time="2024-12-13T14:34:24.329566997Z" level=info msg="StartContainer for \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\" returns successfully" Dec 13 14:34:24.342309 systemd[1]: cri-containerd-f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348.scope: Deactivated successfully. Dec 13 14:34:24.499572 kubelet[2073]: E1213 14:34:24.499516 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:24.571619 env[1718]: time="2024-12-13T14:34:24.571541848Z" level=info msg="shim disconnected" id=f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348 Dec 13 14:34:24.571619 env[1718]: time="2024-12-13T14:34:24.571601761Z" level=warning msg="cleaning up after shim disconnected" id=f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348 namespace=k8s.io Dec 13 14:34:24.571619 env[1718]: time="2024-12-13T14:34:24.571615175Z" level=info msg="cleaning up dead shim" Dec 13 14:34:24.593849 env[1718]: time="2024-12-13T14:34:24.593570497Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2253 runtime=io.containerd.runc.v2\n" Dec 13 14:34:24.974929 env[1718]: time="2024-12-13T14:34:24.968661838Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:34:25.026681 env[1718]: time="2024-12-13T14:34:25.026606722Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\"" Dec 13 14:34:25.027797 env[1718]: time="2024-12-13T14:34:25.027747186Z" level=info msg="StartContainer for 
\"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\"" Dec 13 14:34:25.060049 systemd[1]: Started cri-containerd-5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720.scope. Dec 13 14:34:25.117468 env[1718]: time="2024-12-13T14:34:25.117403698Z" level=info msg="StartContainer for \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\" returns successfully" Dec 13 14:34:25.132300 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:34:25.132648 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:34:25.133686 systemd[1]: Stopping systemd-sysctl.service... Dec 13 14:34:25.136306 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:34:25.151304 systemd[1]: cri-containerd-5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720.scope: Deactivated successfully. Dec 13 14:34:25.159528 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:34:25.232032 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348-rootfs.mount: Deactivated successfully. 
Dec 13 14:34:25.267123 env[1718]: time="2024-12-13T14:34:25.266958562Z" level=info msg="shim disconnected" id=5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720 Dec 13 14:34:25.267123 env[1718]: time="2024-12-13T14:34:25.267120275Z" level=warning msg="cleaning up after shim disconnected" id=5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720 namespace=k8s.io Dec 13 14:34:25.267606 env[1718]: time="2024-12-13T14:34:25.267137299Z" level=info msg="cleaning up dead shim" Dec 13 14:34:25.285082 env[1718]: time="2024-12-13T14:34:25.285029347Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2319 runtime=io.containerd.runc.v2\n" Dec 13 14:34:25.500951 kubelet[2073]: E1213 14:34:25.500837 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:34:25.742967 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount666908545.mount: Deactivated successfully. Dec 13 14:34:25.970267 env[1718]: time="2024-12-13T14:34:25.970131359Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:34:25.998592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1206734743.mount: Deactivated successfully. 
Dec 13 14:34:26.020783 env[1718]: time="2024-12-13T14:34:26.020639082Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\""
Dec 13 14:34:26.022079 env[1718]: time="2024-12-13T14:34:26.022035912Z" level=info msg="StartContainer for \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\""
Dec 13 14:34:26.064748 systemd[1]: Started cri-containerd-abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99.scope.
Dec 13 14:34:26.218230 systemd[1]: cri-containerd-abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99.scope: Deactivated successfully.
Dec 13 14:34:26.223131 env[1718]: time="2024-12-13T14:34:26.221224316Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9536e26b_de09_4314_ae82_cb9537a031ba.slice/cri-containerd-abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99.scope/memory.events\": no such file or directory"
Dec 13 14:34:26.229481 env[1718]: time="2024-12-13T14:34:26.226113536Z" level=info msg="StartContainer for \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\" returns successfully"
Dec 13 14:34:26.227715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1245356432.mount: Deactivated successfully.
Dec 13 14:34:26.268047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99-rootfs.mount: Deactivated successfully.
Dec 13 14:34:26.356964 env[1718]: time="2024-12-13T14:34:26.356828649Z" level=info msg="shim disconnected" id=abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99
Dec 13 14:34:26.357726 env[1718]: time="2024-12-13T14:34:26.357697367Z" level=warning msg="cleaning up after shim disconnected" id=abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99 namespace=k8s.io
Dec 13 14:34:26.357824 env[1718]: time="2024-12-13T14:34:26.357810119Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:26.378761 env[1718]: time="2024-12-13T14:34:26.378705730Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2377 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:26.501329 kubelet[2073]: E1213 14:34:26.501214 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:26.976339 env[1718]: time="2024-12-13T14:34:26.976005924Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:34:27.017201 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2040546747.mount: Deactivated successfully.
Dec 13 14:34:27.033637 env[1718]: time="2024-12-13T14:34:27.033556418Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\""
Dec 13 14:34:27.035100 env[1718]: time="2024-12-13T14:34:27.035065801Z" level=info msg="StartContainer for \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\""
Dec 13 14:34:27.090131 systemd[1]: Started cri-containerd-36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a.scope.
Dec 13 14:34:27.145988 systemd[1]: cri-containerd-36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a.scope: Deactivated successfully.
Dec 13 14:34:27.148407 env[1718]: time="2024-12-13T14:34:27.148325141Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9536e26b_de09_4314_ae82_cb9537a031ba.slice/cri-containerd-36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a.scope/memory.events\": no such file or directory"
Dec 13 14:34:27.162099 env[1718]: time="2024-12-13T14:34:27.160690760Z" level=info msg="StartContainer for \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\" returns successfully"
Dec 13 14:34:27.227080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528584364.mount: Deactivated successfully.
Dec 13 14:34:27.232283 env[1718]: time="2024-12-13T14:34:27.232233456Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:27.238009 env[1718]: time="2024-12-13T14:34:27.237834984Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:27.247443 env[1718]: time="2024-12-13T14:34:27.247391032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:27.258698 env[1718]: time="2024-12-13T14:34:27.258632506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:27.260603 env[1718]: time="2024-12-13T14:34:27.259504784Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:34:27.265306 env[1718]: time="2024-12-13T14:34:27.265254599Z" level=info msg="CreateContainer within sandbox \"0e0bbf505c8320cc2dc435f3a32ec1329b92a05d878188988d6bee2fe6671964\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:34:27.266279 env[1718]: time="2024-12-13T14:34:27.266230831Z" level=info msg="shim disconnected" id=36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a
Dec 13 14:34:27.266403 env[1718]: time="2024-12-13T14:34:27.266283243Z" level=warning msg="cleaning up after shim disconnected" id=36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a namespace=k8s.io
Dec 13 14:34:27.266403 env[1718]: time="2024-12-13T14:34:27.266296896Z" level=info msg="cleaning up dead shim"
Dec 13 14:34:27.293277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1184393029.mount: Deactivated successfully.
Dec 13 14:34:27.303940 env[1718]: time="2024-12-13T14:34:27.303269777Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:34:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2435 runtime=io.containerd.runc.v2\n"
Dec 13 14:34:27.306115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1965820691.mount: Deactivated successfully.
Dec 13 14:34:27.310131 env[1718]: time="2024-12-13T14:34:27.310041766Z" level=info msg="CreateContainer within sandbox \"0e0bbf505c8320cc2dc435f3a32ec1329b92a05d878188988d6bee2fe6671964\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f75ca810d2d5e932bcf88c1202df0c927a9f58f10aac35f81839c9ab6fec885b\""
Dec 13 14:34:27.311957 env[1718]: time="2024-12-13T14:34:27.311864401Z" level=info msg="StartContainer for \"f75ca810d2d5e932bcf88c1202df0c927a9f58f10aac35f81839c9ab6fec885b\""
Dec 13 14:34:27.333502 systemd[1]: Started cri-containerd-f75ca810d2d5e932bcf88c1202df0c927a9f58f10aac35f81839c9ab6fec885b.scope.
Dec 13 14:34:27.457501 env[1718]: time="2024-12-13T14:34:27.457442503Z" level=info msg="StartContainer for \"f75ca810d2d5e932bcf88c1202df0c927a9f58f10aac35f81839c9ab6fec885b\" returns successfully"
Dec 13 14:34:27.502818 kubelet[2073]: E1213 14:34:27.502715 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:28.002153 env[1718]: time="2024-12-13T14:34:28.002048884Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:34:28.063304 env[1718]: time="2024-12-13T14:34:28.062935817Z" level=info msg="CreateContainer within sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\""
Dec 13 14:34:28.066372 env[1718]: time="2024-12-13T14:34:28.064095416Z" level=info msg="StartContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\""
Dec 13 14:34:28.122847 systemd[1]: Started cri-containerd-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3.scope.
Dec 13 14:34:28.211769 env[1718]: time="2024-12-13T14:34:28.211712290Z" level=info msg="StartContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" returns successfully"
Dec 13 14:34:28.288509 systemd[1]: run-containerd-runc-k8s.io-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3-runc.nFojNs.mount: Deactivated successfully.
Dec 13 14:34:28.460533 kubelet[2073]: I1213 14:34:28.460496 2073 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 14:34:28.483584 kubelet[2073]: E1213 14:34:28.483535 2073 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:28.503370 kubelet[2073]: E1213 14:34:28.503283 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:28.949922 kernel: Initializing XFRM netlink socket
Dec 13 14:34:29.071578 kubelet[2073]: I1213 14:34:29.071482 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-kd5cg" podStartSLOduration=8.71552683 podStartE2EDuration="21.071455837s" podCreationTimestamp="2024-12-13 14:34:08 +0000 UTC" firstStartedPulling="2024-12-13 14:34:11.841177482 +0000 UTC m=+4.200407070" lastFinishedPulling="2024-12-13 14:34:24.197106493 +0000 UTC m=+16.556336077" observedRunningTime="2024-12-13 14:34:29.07106112 +0000 UTC m=+21.430290727" watchObservedRunningTime="2024-12-13 14:34:29.071455837 +0000 UTC m=+21.430685444"
Dec 13 14:34:29.072329 kubelet[2073]: I1213 14:34:29.072266 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-srm92" podStartSLOduration=5.660943241 podStartE2EDuration="21.072243748s" podCreationTimestamp="2024-12-13 14:34:08 +0000 UTC" firstStartedPulling="2024-12-13 14:34:11.851706585 +0000 UTC m=+4.210936169" lastFinishedPulling="2024-12-13 14:34:27.263007083 +0000 UTC m=+19.622236676" observedRunningTime="2024-12-13 14:34:28.052086637 +0000 UTC m=+20.411316241" watchObservedRunningTime="2024-12-13 14:34:29.072243748 +0000 UTC m=+21.431473355"
Dec 13 14:34:29.504432 kubelet[2073]: E1213 14:34:29.504373 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:30.505293 kubelet[2073]: E1213 14:34:30.505229 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:30.710853 systemd-networkd[1453]: cilium_host: Link UP
Dec 13 14:34:30.719050 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:34:30.719502 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:34:30.716305 (udev-worker)[2741]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:30.722283 systemd-networkd[1453]: cilium_net: Link UP
Dec 13 14:34:30.722726 systemd-networkd[1453]: cilium_net: Gained carrier
Dec 13 14:34:30.722977 systemd-networkd[1453]: cilium_host: Gained carrier
Dec 13 14:34:30.726277 (udev-worker)[2742]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:30.952383 (udev-worker)[2752]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:30.961687 systemd-networkd[1453]: cilium_vxlan: Link UP
Dec 13 14:34:30.961698 systemd-networkd[1453]: cilium_vxlan: Gained carrier
Dec 13 14:34:31.360081 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:34:31.422114 systemd-networkd[1453]: cilium_host: Gained IPv6LL
Dec 13 14:34:31.506176 kubelet[2073]: E1213 14:34:31.506102 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:31.742112 systemd-networkd[1453]: cilium_net: Gained IPv6LL
Dec 13 14:34:32.264534 systemd[1]: Created slice kubepods-besteffort-pod200f20eb_3a2e_4de4_8304_a455573f7e9d.slice.
Dec 13 14:34:32.367810 kubelet[2073]: I1213 14:34:32.367763 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8rhs\" (UniqueName: \"kubernetes.io/projected/200f20eb-3a2e-4de4-8304-a455573f7e9d-kube-api-access-p8rhs\") pod \"nginx-deployment-8587fbcb89-tnwp7\" (UID: \"200f20eb-3a2e-4de4-8304-a455573f7e9d\") " pod="default/nginx-deployment-8587fbcb89-tnwp7"
Dec 13 14:34:32.506574 kubelet[2073]: E1213 14:34:32.506532 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:32.573941 env[1718]: time="2024-12-13T14:34:32.573757301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tnwp7,Uid:200f20eb-3a2e-4de4-8304-a455573f7e9d,Namespace:default,Attempt:0,}"
Dec 13 14:34:32.830323 (udev-worker)[2485]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:32.837221 systemd-networkd[1453]: lxc_health: Link UP
Dec 13 14:34:32.847610 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:34:32.846459 systemd-networkd[1453]: lxc_health: Gained carrier
Dec 13 14:34:33.025089 systemd-networkd[1453]: cilium_vxlan: Gained IPv6LL
Dec 13 14:34:33.507935 kubelet[2073]: E1213 14:34:33.507866 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:33.654594 systemd-networkd[1453]: lxca3e760e35b35: Link UP
Dec 13 14:34:33.670904 kernel: eth0: renamed from tmp8fc0a
Dec 13 14:34:33.680753 systemd-networkd[1453]: lxca3e760e35b35: Gained carrier
Dec 13 14:34:33.680969 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca3e760e35b35: link becomes ready
Dec 13 14:34:34.238171 systemd-networkd[1453]: lxc_health: Gained IPv6LL
Dec 13 14:34:34.508890 kubelet[2073]: E1213 14:34:34.508787 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:35.075305 amazon-ssm-agent[1702]: 2024-12-13 14:34:35 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Dec 13 14:34:35.198164 systemd-networkd[1453]: lxca3e760e35b35: Gained IPv6LL
Dec 13 14:34:35.510210 kubelet[2073]: E1213 14:34:35.510156 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:36.512140 kubelet[2073]: E1213 14:34:36.511841 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:37.512639 kubelet[2073]: E1213 14:34:37.512590 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:38.515073 kubelet[2073]: E1213 14:34:38.515023 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:38.936341 update_engine[1715]: I1213 14:34:38.934983 1715 update_attempter.cc:509] Updating boot flags...
Dec 13 14:34:39.496485 kubelet[2073]: I1213 14:34:39.496454 2073 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Dec 13 14:34:39.517418 kubelet[2073]: E1213 14:34:39.517377 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:40.523980 kubelet[2073]: E1213 14:34:40.523930 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:40.859293 env[1718]: time="2024-12-13T14:34:40.859113381Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:40.859293 env[1718]: time="2024-12-13T14:34:40.859172757Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:40.859293 env[1718]: time="2024-12-13T14:34:40.859189293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:40.861456 env[1718]: time="2024-12-13T14:34:40.861360939Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183 pid=3388 runtime=io.containerd.runc.v2
Dec 13 14:34:40.910238 systemd[1]: run-containerd-runc-k8s.io-8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183-runc.oTRZj3.mount: Deactivated successfully.
Dec 13 14:34:40.940569 systemd[1]: Started cri-containerd-8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183.scope.
Dec 13 14:34:41.021762 env[1718]: time="2024-12-13T14:34:41.018226058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-tnwp7,Uid:200f20eb-3a2e-4de4-8304-a455573f7e9d,Namespace:default,Attempt:0,} returns sandbox id \"8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183\""
Dec 13 14:34:41.030200 env[1718]: time="2024-12-13T14:34:41.030091986Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:34:41.525393 kubelet[2073]: E1213 14:34:41.525341 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:42.526501 kubelet[2073]: E1213 14:34:42.526418 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:43.526828 kubelet[2073]: E1213 14:34:43.526746 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:44.527855 kubelet[2073]: E1213 14:34:44.527805 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:45.395923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1302333198.mount: Deactivated successfully.
Dec 13 14:34:45.528009 kubelet[2073]: E1213 14:34:45.527962 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:46.535065 kubelet[2073]: E1213 14:34:46.534981 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:47.535733 kubelet[2073]: E1213 14:34:47.535685 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:47.864516 env[1718]: time="2024-12-13T14:34:47.864370106Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:47.868555 env[1718]: time="2024-12-13T14:34:47.868502886Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:47.872285 env[1718]: time="2024-12-13T14:34:47.872234052Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:47.882016 env[1718]: time="2024-12-13T14:34:47.881946154Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:34:47.883122 env[1718]: time="2024-12-13T14:34:47.883078191Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:34:47.887113 env[1718]: time="2024-12-13T14:34:47.887066689Z" level=info msg="CreateContainer within sandbox \"8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Dec 13 14:34:47.919992 env[1718]: time="2024-12-13T14:34:47.919932868Z" level=info msg="CreateContainer within sandbox \"8fc0ac8b007c71223f992278654e1fa1bc7b3514a36e0920aba4e9aa30e4d183\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a2656f4eab2bed918e7bd373f975f78bca9af457f539e97163cf4c326c306b53\""
Dec 13 14:34:47.920925 env[1718]: time="2024-12-13T14:34:47.920867968Z" level=info msg="StartContainer for \"a2656f4eab2bed918e7bd373f975f78bca9af457f539e97163cf4c326c306b53\""
Dec 13 14:34:47.971283 systemd[1]: Started cri-containerd-a2656f4eab2bed918e7bd373f975f78bca9af457f539e97163cf4c326c306b53.scope.
Dec 13 14:34:48.023181 env[1718]: time="2024-12-13T14:34:48.023024277Z" level=info msg="StartContainer for \"a2656f4eab2bed918e7bd373f975f78bca9af457f539e97163cf4c326c306b53\" returns successfully"
Dec 13 14:34:48.139122 kubelet[2073]: I1213 14:34:48.138927 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-tnwp7" podStartSLOduration=9.28344978 podStartE2EDuration="16.138904196s" podCreationTimestamp="2024-12-13 14:34:32 +0000 UTC" firstStartedPulling="2024-12-13 14:34:41.02955168 +0000 UTC m=+33.388781270" lastFinishedPulling="2024-12-13 14:34:47.885006094 +0000 UTC m=+40.244235686" observedRunningTime="2024-12-13 14:34:48.138842615 +0000 UTC m=+40.498072222" watchObservedRunningTime="2024-12-13 14:34:48.138904196 +0000 UTC m=+40.498133805"
Dec 13 14:34:48.482974 kubelet[2073]: E1213 14:34:48.482815 2073 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:48.536905 kubelet[2073]: E1213 14:34:48.536841 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:48.904490 systemd[1]: run-containerd-runc-k8s.io-a2656f4eab2bed918e7bd373f975f78bca9af457f539e97163cf4c326c306b53-runc.snE5BA.mount: Deactivated successfully.
Dec 13 14:34:49.537991 kubelet[2073]: E1213 14:34:49.537886 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:50.538440 kubelet[2073]: E1213 14:34:50.538386 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:51.539052 kubelet[2073]: E1213 14:34:51.538990 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:52.539674 kubelet[2073]: E1213 14:34:52.539614 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:53.540614 kubelet[2073]: E1213 14:34:53.540550 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:54.541780 kubelet[2073]: E1213 14:34:54.541722 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:55.542934 kubelet[2073]: E1213 14:34:55.542857 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:56.543108 kubelet[2073]: E1213 14:34:56.543040 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:57.544037 kubelet[2073]: E1213 14:34:57.543983 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:57.698299 systemd[1]: Created slice kubepods-besteffort-pod6eceec34_97ac_412e_bb37_f73c77bd559c.slice.
Dec 13 14:34:57.721587 kubelet[2073]: I1213 14:34:57.721544 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/6eceec34-97ac-412e-bb37-f73c77bd559c-data\") pod \"nfs-server-provisioner-0\" (UID: \"6eceec34-97ac-412e-bb37-f73c77bd559c\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:34:57.721862 kubelet[2073]: I1213 14:34:57.721842 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w55nm\" (UniqueName: \"kubernetes.io/projected/6eceec34-97ac-412e-bb37-f73c77bd559c-kube-api-access-w55nm\") pod \"nfs-server-provisioner-0\" (UID: \"6eceec34-97ac-412e-bb37-f73c77bd559c\") " pod="default/nfs-server-provisioner-0"
Dec 13 14:34:58.029295 env[1718]: time="2024-12-13T14:34:58.028454088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6eceec34-97ac-412e-bb37-f73c77bd559c,Namespace:default,Attempt:0,}"
Dec 13 14:34:58.149662 (udev-worker)[3480]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:58.159762 (udev-worker)[3497]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:34:58.167581 systemd-networkd[1453]: lxc85e1a786d821: Link UP
Dec 13 14:34:58.195383 kernel: eth0: renamed from tmp031b0
Dec 13 14:34:58.220854 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:34:58.221050 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc85e1a786d821: link becomes ready
Dec 13 14:34:58.221208 systemd-networkd[1453]: lxc85e1a786d821: Gained carrier
Dec 13 14:34:58.545830 kubelet[2073]: E1213 14:34:58.545744 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:58.563810 env[1718]: time="2024-12-13T14:34:58.563626845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:34:58.563810 env[1718]: time="2024-12-13T14:34:58.563676437Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:34:58.563810 env[1718]: time="2024-12-13T14:34:58.563777759Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:34:58.564422 env[1718]: time="2024-12-13T14:34:58.564366019Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/031b0fc0e4c9e4fd3adc4233b8d3b4e491f184682d8e0755faefa2266c43341d pid=3510 runtime=io.containerd.runc.v2
Dec 13 14:34:58.599692 systemd[1]: Started cri-containerd-031b0fc0e4c9e4fd3adc4233b8d3b4e491f184682d8e0755faefa2266c43341d.scope.
Dec 13 14:34:58.684484 env[1718]: time="2024-12-13T14:34:58.684323602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:6eceec34-97ac-412e-bb37-f73c77bd559c,Namespace:default,Attempt:0,} returns sandbox id \"031b0fc0e4c9e4fd3adc4233b8d3b4e491f184682d8e0755faefa2266c43341d\""
Dec 13 14:34:58.688146 env[1718]: time="2024-12-13T14:34:58.688106542Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Dec 13 14:34:59.545941 kubelet[2073]: E1213 14:34:59.545862 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:34:59.945077 systemd-networkd[1453]: lxc85e1a786d821: Gained IPv6LL
Dec 13 14:35:00.547148 kubelet[2073]: E1213 14:35:00.547040 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:01.550884 kubelet[2073]: E1213 14:35:01.550816 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:02.551644 kubelet[2073]: E1213 14:35:02.551566 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:03.279921 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3264462462.mount: Deactivated successfully.
Dec 13 14:35:03.553433 kubelet[2073]: E1213 14:35:03.552989 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:04.554194 kubelet[2073]: E1213 14:35:04.554058 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:05.555035 kubelet[2073]: E1213 14:35:05.554958 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:06.555266 kubelet[2073]: E1213 14:35:06.555164 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:07.296771 env[1718]: time="2024-12-13T14:35:07.296611446Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:07.302125 env[1718]: time="2024-12-13T14:35:07.302069013Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:07.307333 env[1718]: time="2024-12-13T14:35:07.307272726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:07.309576 env[1718]: time="2024-12-13T14:35:07.309527865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:07.310408 env[1718]: time="2024-12-13T14:35:07.310364471Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Dec 13 14:35:07.314930 env[1718]: time="2024-12-13T14:35:07.314863202Z" level=info msg="CreateContainer within sandbox \"031b0fc0e4c9e4fd3adc4233b8d3b4e491f184682d8e0755faefa2266c43341d\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Dec 13 14:35:07.335514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177115192.mount: Deactivated successfully.
Dec 13 14:35:07.354998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount315435811.mount: Deactivated successfully.
Dec 13 14:35:07.370895 env[1718]: time="2024-12-13T14:35:07.370807543Z" level=info msg="CreateContainer within sandbox \"031b0fc0e4c9e4fd3adc4233b8d3b4e491f184682d8e0755faefa2266c43341d\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ed69cd142a2a2cbaa62c157cac787d1e38c0781c9ea28e8bf7d09562201e57a0\""
Dec 13 14:35:07.372816 env[1718]: time="2024-12-13T14:35:07.372765395Z" level=info msg="StartContainer for \"ed69cd142a2a2cbaa62c157cac787d1e38c0781c9ea28e8bf7d09562201e57a0\""
Dec 13 14:35:07.419422 systemd[1]: Started cri-containerd-ed69cd142a2a2cbaa62c157cac787d1e38c0781c9ea28e8bf7d09562201e57a0.scope.
Dec 13 14:35:07.480158 env[1718]: time="2024-12-13T14:35:07.480099881Z" level=info msg="StartContainer for \"ed69cd142a2a2cbaa62c157cac787d1e38c0781c9ea28e8bf7d09562201e57a0\" returns successfully"
Dec 13 14:35:07.556311 kubelet[2073]: E1213 14:35:07.556114 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:08.217977 kubelet[2073]: I1213 14:35:08.217894 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.592348335 podStartE2EDuration="11.217856737s" podCreationTimestamp="2024-12-13 14:34:57 +0000 UTC" firstStartedPulling="2024-12-13 14:34:58.686834215 +0000 UTC m=+51.046063802" lastFinishedPulling="2024-12-13 14:35:07.312342604 +0000 UTC m=+59.671572204" observedRunningTime="2024-12-13 14:35:08.217502191 +0000 UTC m=+60.576731798" watchObservedRunningTime="2024-12-13 14:35:08.217856737 +0000 UTC m=+60.577086353"
Dec 13 14:35:08.483551 kubelet[2073]: E1213 14:35:08.483416 2073 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:08.556698 kubelet[2073]: E1213 14:35:08.556649 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:09.557693 kubelet[2073]: E1213 14:35:09.557626 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:10.558554 kubelet[2073]: E1213 14:35:10.558353 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:11.559636 kubelet[2073]: E1213 14:35:11.559576 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:12.560507 kubelet[2073]: E1213 14:35:12.560445 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:13.561471 kubelet[2073]: E1213 14:35:13.561413 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:14.561651 kubelet[2073]: E1213 14:35:14.561586 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:15.561957 kubelet[2073]: E1213 14:35:15.561799 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:16.563313 kubelet[2073]: E1213 14:35:16.563091 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:17.564424 kubelet[2073]: E1213 14:35:17.564364 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:17.859502 systemd[1]: Created slice kubepods-besteffort-pod1f2a6c30_e64e_4c06_8fe8_04c842f31e5d.slice.
Dec 13 14:35:17.906819 kubelet[2073]: I1213 14:35:17.906760 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-237d77b6-2a90-4590-8cb2-f93e12ac1992\" (UniqueName: \"kubernetes.io/nfs/1f2a6c30-e64e-4c06-8fe8-04c842f31e5d-pvc-237d77b6-2a90-4590-8cb2-f93e12ac1992\") pod \"test-pod-1\" (UID: \"1f2a6c30-e64e-4c06-8fe8-04c842f31e5d\") " pod="default/test-pod-1"
Dec 13 14:35:17.906819 kubelet[2073]: I1213 14:35:17.906812 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8h77\" (UniqueName: \"kubernetes.io/projected/1f2a6c30-e64e-4c06-8fe8-04c842f31e5d-kube-api-access-x8h77\") pod \"test-pod-1\" (UID: \"1f2a6c30-e64e-4c06-8fe8-04c842f31e5d\") " pod="default/test-pod-1"
Dec 13 14:35:18.085910 kernel: FS-Cache: Loaded
Dec 13 14:35:18.178240 kernel: RPC: Registered named UNIX socket transport module.
Dec 13 14:35:18.178421 kernel: RPC: Registered udp transport module.
Dec 13 14:35:18.178456 kernel: RPC: Registered tcp transport module.
Dec 13 14:35:18.178484 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Dec 13 14:35:18.256918 kernel: FS-Cache: Netfs 'nfs' registered for caching
Dec 13 14:35:18.465170 kernel: NFS: Registering the id_resolver key type
Dec 13 14:35:18.465344 kernel: Key type id_resolver registered
Dec 13 14:35:18.465387 kernel: Key type id_legacy registered
Dec 13 14:35:18.520516 nfsidmap[3635]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:35:18.524800 nfsidmap[3636]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Dec 13 14:35:18.571868 kubelet[2073]: E1213 14:35:18.571735 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:18.772425 env[1718]: time="2024-12-13T14:35:18.772277783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1f2a6c30-e64e-4c06-8fe8-04c842f31e5d,Namespace:default,Attempt:0,}"
Dec 13 14:35:18.817273 (udev-worker)[3623]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:35:18.820272 systemd-networkd[1453]: lxc2d82e74bcf86: Link UP
Dec 13 14:35:18.824089 (udev-worker)[3632]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:35:18.833547 kernel: eth0: renamed from tmp0037e
Dec 13 14:35:18.843234 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:35:18.843394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2d82e74bcf86: link becomes ready
Dec 13 14:35:18.843434 systemd-networkd[1453]: lxc2d82e74bcf86: Gained carrier
Dec 13 14:35:19.107926 env[1718]: time="2024-12-13T14:35:19.107732017Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:35:19.107926 env[1718]: time="2024-12-13T14:35:19.107780501Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:35:19.107926 env[1718]: time="2024-12-13T14:35:19.107796186Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:35:19.108432 env[1718]: time="2024-12-13T14:35:19.108351692Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0037e0878dcb1db01bc7fa8e85abbacf7603d717fcb6d7af80106a5feb2930e4 pid=3664 runtime=io.containerd.runc.v2
Dec 13 14:35:19.134618 systemd[1]: Started cri-containerd-0037e0878dcb1db01bc7fa8e85abbacf7603d717fcb6d7af80106a5feb2930e4.scope.
Dec 13 14:35:19.182491 env[1718]: time="2024-12-13T14:35:19.182442215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1f2a6c30-e64e-4c06-8fe8-04c842f31e5d,Namespace:default,Attempt:0,} returns sandbox id \"0037e0878dcb1db01bc7fa8e85abbacf7603d717fcb6d7af80106a5feb2930e4\""
Dec 13 14:35:19.185011 env[1718]: time="2024-12-13T14:35:19.184945808Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Dec 13 14:35:19.493755 env[1718]: time="2024-12-13T14:35:19.493616559Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:19.495639 env[1718]: time="2024-12-13T14:35:19.495591687Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:19.497738 env[1718]: time="2024-12-13T14:35:19.497698629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:19.502387 env[1718]: time="2024-12-13T14:35:19.502332544Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:35:19.503229 env[1718]: time="2024-12-13T14:35:19.503182891Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\""
Dec 13 14:35:19.510585 env[1718]: time="2024-12-13T14:35:19.510534839Z" level=info msg="CreateContainer within sandbox \"0037e0878dcb1db01bc7fa8e85abbacf7603d717fcb6d7af80106a5feb2930e4\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Dec 13 14:35:19.534822 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4125918772.mount: Deactivated successfully.
Dec 13 14:35:19.556578 env[1718]: time="2024-12-13T14:35:19.556513349Z" level=info msg="CreateContainer within sandbox \"0037e0878dcb1db01bc7fa8e85abbacf7603d717fcb6d7af80106a5feb2930e4\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"367cfb0e4d85b153f5d3901b7fd254e418f96f889ea799825c8328501d0b69b7\""
Dec 13 14:35:19.557553 env[1718]: time="2024-12-13T14:35:19.557512570Z" level=info msg="StartContainer for \"367cfb0e4d85b153f5d3901b7fd254e418f96f889ea799825c8328501d0b69b7\""
Dec 13 14:35:19.574499 kubelet[2073]: E1213 14:35:19.574432 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:19.601933 systemd[1]: Started cri-containerd-367cfb0e4d85b153f5d3901b7fd254e418f96f889ea799825c8328501d0b69b7.scope.
Dec 13 14:35:19.665329 env[1718]: time="2024-12-13T14:35:19.665186281Z" level=info msg="StartContainer for \"367cfb0e4d85b153f5d3901b7fd254e418f96f889ea799825c8328501d0b69b7\" returns successfully"
Dec 13 14:35:20.119365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2861237979.mount: Deactivated successfully.
Dec 13 14:35:20.127282 systemd-networkd[1453]: lxc2d82e74bcf86: Gained IPv6LL
Dec 13 14:35:20.575375 kubelet[2073]: E1213 14:35:20.575259 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:21.575582 kubelet[2073]: E1213 14:35:21.575523 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:22.576555 kubelet[2073]: E1213 14:35:22.576496 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:23.577400 kubelet[2073]: E1213 14:35:23.577339 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:23.913280 kubelet[2073]: I1213 14:35:23.912860 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=25.592423047 podStartE2EDuration="25.912831877s" podCreationTimestamp="2024-12-13 14:34:58 +0000 UTC" firstStartedPulling="2024-12-13 14:35:19.184259583 +0000 UTC m=+71.543489167" lastFinishedPulling="2024-12-13 14:35:19.504668409 +0000 UTC m=+71.863897997" observedRunningTime="2024-12-13 14:35:20.281773454 +0000 UTC m=+72.641003058" watchObservedRunningTime="2024-12-13 14:35:23.912831877 +0000 UTC m=+76.272061482"
Dec 13 14:35:23.945098 systemd[1]: run-containerd-runc-k8s.io-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3-runc.mwKMa1.mount: Deactivated successfully.
Dec 13 14:35:24.002482 env[1718]: time="2024-12-13T14:35:24.002395899Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:35:24.023698 env[1718]: time="2024-12-13T14:35:24.023651562Z" level=info msg="StopContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" with timeout 2 (s)"
Dec 13 14:35:24.024019 env[1718]: time="2024-12-13T14:35:24.023981427Z" level=info msg="Stop container \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" with signal terminated"
Dec 13 14:35:24.033049 systemd-networkd[1453]: lxc_health: Link DOWN
Dec 13 14:35:24.033059 systemd-networkd[1453]: lxc_health: Lost carrier
Dec 13 14:35:24.208026 systemd[1]: cri-containerd-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3.scope: Deactivated successfully.
Dec 13 14:35:24.208393 systemd[1]: cri-containerd-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3.scope: Consumed 8.684s CPU time.
Dec 13 14:35:24.282005 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3-rootfs.mount: Deactivated successfully.
Dec 13 14:35:24.330625 env[1718]: time="2024-12-13T14:35:24.330558929Z" level=info msg="shim disconnected" id=99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3
Dec 13 14:35:24.330625 env[1718]: time="2024-12-13T14:35:24.330627313Z" level=warning msg="cleaning up after shim disconnected" id=99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3 namespace=k8s.io
Dec 13 14:35:24.330625 env[1718]: time="2024-12-13T14:35:24.330640025Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:24.342005 env[1718]: time="2024-12-13T14:35:24.341953705Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3796 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:24.350074 env[1718]: time="2024-12-13T14:35:24.350010190Z" level=info msg="StopContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" returns successfully"
Dec 13 14:35:24.351153 env[1718]: time="2024-12-13T14:35:24.351110268Z" level=info msg="StopPodSandbox for \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\""
Dec 13 14:35:24.351300 env[1718]: time="2024-12-13T14:35:24.351191098Z" level=info msg="Container to stop \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.351300 env[1718]: time="2024-12-13T14:35:24.351211752Z" level=info msg="Container to stop \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.351300 env[1718]: time="2024-12-13T14:35:24.351226256Z" level=info msg="Container to stop \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.351300 env[1718]: time="2024-12-13T14:35:24.351241775Z" level=info msg="Container to stop \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.351300 env[1718]: time="2024-12-13T14:35:24.351257056Z" level=info msg="Container to stop \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.354003 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858-shm.mount: Deactivated successfully.
Dec 13 14:35:24.369461 systemd[1]: cri-containerd-ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858.scope: Deactivated successfully.
Dec 13 14:35:24.406244 env[1718]: time="2024-12-13T14:35:24.406184385Z" level=info msg="shim disconnected" id=ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858
Dec 13 14:35:24.407136 env[1718]: time="2024-12-13T14:35:24.407100046Z" level=warning msg="cleaning up after shim disconnected" id=ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858 namespace=k8s.io
Dec 13 14:35:24.407283 env[1718]: time="2024-12-13T14:35:24.407266063Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:24.417069 env[1718]: time="2024-12-13T14:35:24.417007412Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3829 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:24.417818 env[1718]: time="2024-12-13T14:35:24.417779323Z" level=info msg="TearDown network for sandbox \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" successfully"
Dec 13 14:35:24.417818 env[1718]: time="2024-12-13T14:35:24.417813069Z" level=info msg="StopPodSandbox for \"ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858\" returns successfully"
Dec 13 14:35:24.565022 kubelet[2073]: I1213 14:35:24.564972 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-etc-cni-netd\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565032 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-config-path\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565062 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bclx6\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-kube-api-access-bclx6\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565086 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cni-path\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565107 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-lib-modules\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565125 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-net\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565272 kubelet[2073]: I1213 14:35:24.565158 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9536e26b-de09-4314-ae82-cb9537a031ba-clustermesh-secrets\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565181 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-kernel\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565205 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-cgroup\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565227 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-hubble-tls\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565254 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-bpf-maps\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565274 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-run\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565571 kubelet[2073]: I1213 14:35:24.565298 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-hostproc\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.565922 kubelet[2073]: I1213 14:35:24.565321 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-xtables-lock\") pod \"9536e26b-de09-4314-ae82-cb9537a031ba\" (UID: \"9536e26b-de09-4314-ae82-cb9537a031ba\") "
Dec 13 14:35:24.566865 kubelet[2073]: I1213 14:35:24.566818 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567014 kubelet[2073]: I1213 14:35:24.566931 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567014 kubelet[2073]: I1213 14:35:24.566960 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567541 kubelet[2073]: I1213 14:35:24.567507 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567703 kubelet[2073]: I1213 14:35:24.567680 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567788 kubelet[2073]: I1213 14:35:24.567714 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567788 kubelet[2073]: I1213 14:35:24.567737 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-hostproc" (OuterVolumeSpecName: "hostproc") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.567788 kubelet[2073]: I1213 14:35:24.567760 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cni-path" (OuterVolumeSpecName: "cni-path") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.568514 kubelet[2073]: I1213 14:35:24.568489 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.568608 kubelet[2073]: I1213 14:35:24.568529 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.572617 kubelet[2073]: I1213 14:35:24.572460 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:35:24.575497 kubelet[2073]: I1213 14:35:24.575448 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-kube-api-access-bclx6" (OuterVolumeSpecName: "kube-api-access-bclx6") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "kube-api-access-bclx6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:24.578530 kubelet[2073]: E1213 14:35:24.578482 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:24.579303 kubelet[2073]: I1213 14:35:24.579268 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:24.579662 kubelet[2073]: I1213 14:35:24.579634 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9536e26b-de09-4314-ae82-cb9537a031ba-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9536e26b-de09-4314-ae82-cb9537a031ba" (UID: "9536e26b-de09-4314-ae82-cb9537a031ba"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:35:24.666173 kubelet[2073]: I1213 14:35:24.666113 2073 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-net\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666173 kubelet[2073]: I1213 14:35:24.666157 2073 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-lib-modules\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666173 kubelet[2073]: I1213 14:35:24.666171 2073 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-hubble-tls\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666173 kubelet[2073]: I1213 14:35:24.666182 2073 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-bpf-maps\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666193 2073 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9536e26b-de09-4314-ae82-cb9537a031ba-clustermesh-secrets\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666207 2073 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-host-proc-sys-kernel\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666218 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-cgroup\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666235 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-run\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666245 2073 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-xtables-lock\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666254 2073 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-hostproc\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666265 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9536e26b-de09-4314-ae82-cb9537a031ba-cilium-config-path\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666604 kubelet[2073]: I1213 14:35:24.666275 2073 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bclx6\" (UniqueName: \"kubernetes.io/projected/9536e26b-de09-4314-ae82-cb9537a031ba-kube-api-access-bclx6\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666808 kubelet[2073]: I1213 14:35:24.666285 2073 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-cni-path\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.666808 kubelet[2073]: I1213 14:35:24.666297 2073 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9536e26b-de09-4314-ae82-cb9537a031ba-etc-cni-netd\") on node \"172.31.23.152\" DevicePath \"\""
Dec 13 14:35:24.842829 systemd[1]: Removed slice kubepods-burstable-pod9536e26b_de09_4314_ae82_cb9537a031ba.slice.
Dec 13 14:35:24.843105 systemd[1]: kubepods-burstable-pod9536e26b_de09_4314_ae82_cb9537a031ba.slice: Consumed 8.821s CPU time.
Dec 13 14:35:24.937865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ea85b797c35ee0dfa1fdf905717f9898f192fb0dd573a2b75065c874cb1bb858-rootfs.mount: Deactivated successfully.
Dec 13 14:35:24.938029 systemd[1]: var-lib-kubelet-pods-9536e26b\x2dde09\x2d4314\x2dae82\x2dcb9537a031ba-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbclx6.mount: Deactivated successfully.
Dec 13 14:35:24.938123 systemd[1]: var-lib-kubelet-pods-9536e26b\x2dde09\x2d4314\x2dae82\x2dcb9537a031ba-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:35:24.938203 systemd[1]: var-lib-kubelet-pods-9536e26b\x2dde09\x2d4314\x2dae82\x2dcb9537a031ba-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:35:25.291923 kubelet[2073]: I1213 14:35:25.291763 2073 scope.go:117] "RemoveContainer" containerID="99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3"
Dec 13 14:35:25.305748 env[1718]: time="2024-12-13T14:35:25.305671220Z" level=info msg="RemoveContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\""
Dec 13 14:35:25.316571 env[1718]: time="2024-12-13T14:35:25.316511376Z" level=info msg="RemoveContainer for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" returns successfully"
Dec 13 14:35:25.316915 kubelet[2073]: I1213 14:35:25.316889 2073 scope.go:117] "RemoveContainer" containerID="36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a"
Dec 13 14:35:25.318511 env[1718]: time="2024-12-13T14:35:25.318463479Z" level=info msg="RemoveContainer for \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\""
Dec 13 14:35:25.323747 env[1718]: time="2024-12-13T14:35:25.323692490Z" level=info msg="RemoveContainer for \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\" returns successfully"
Dec 13 14:35:25.325666 kubelet[2073]: I1213 14:35:25.325311 2073 scope.go:117] "RemoveContainer" containerID="abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99"
Dec 13 14:35:25.332405 env[1718]: time="2024-12-13T14:35:25.329684094Z" level=info msg="RemoveContainer for \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\""
Dec 13 14:35:25.342373 env[1718]: time="2024-12-13T14:35:25.342316394Z" level=info msg="RemoveContainer for \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\" returns successfully"
Dec 13 14:35:25.342741 kubelet[2073]: I1213 14:35:25.342707 2073 scope.go:117] "RemoveContainer" containerID="5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720"
Dec 13 14:35:25.349233 env[1718]: time="2024-12-13T14:35:25.349179352Z" level=info msg="RemoveContainer for \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\""
Dec 13 14:35:25.363405 env[1718]: time="2024-12-13T14:35:25.363332561Z" level=info msg="RemoveContainer for \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\" returns successfully"
Dec 13 14:35:25.363811 kubelet[2073]: I1213 14:35:25.363772 2073 scope.go:117] "RemoveContainer" containerID="f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348"
Dec 13 14:35:25.371980 env[1718]: time="2024-12-13T14:35:25.371841650Z" level=info msg="RemoveContainer for \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\""
Dec 13 14:35:25.377771 env[1718]: time="2024-12-13T14:35:25.377709823Z" level=info msg="RemoveContainer for \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\" returns successfully"
Dec 13 14:35:25.378854 kubelet[2073]: I1213 14:35:25.378677 2073 scope.go:117] "RemoveContainer" containerID="99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3"
Dec 13 14:35:25.379994 env[1718]: time="2024-12-13T14:35:25.379776890Z" level=error msg="ContainerStatus for \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\": not found"
Dec 13 14:35:25.380288 kubelet[2073]: E1213 14:35:25.380260 2073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\": not found" containerID="99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3"
Dec 13 14:35:25.380512 kubelet[2073]: I1213 14:35:25.380429 2073 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3"} err="failed to get container status \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\": rpc error: code = NotFound desc = an error occurred when try to find container \"99606c8be4a38c89560ba2e46b3e7a21b2ba21eb4616403fa244b987502be8c3\": not found"
Dec 13 14:35:25.380602 kubelet[2073]: I1213 14:35:25.380518 2073 scope.go:117] "RemoveContainer" containerID="36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a"
Dec 13 14:35:25.383015 env[1718]: time="2024-12-13T14:35:25.382915287Z" level=error msg="ContainerStatus for \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\": not found"
Dec 13 14:35:25.383224 kubelet[2073]: E1213 14:35:25.383189 2073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\": not found" containerID="36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a"
Dec 13 14:35:25.383314 kubelet[2073]: I1213 14:35:25.383232 2073 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a"} err="failed to get container status \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\": rpc error: code = NotFound desc = an error occurred when try to find container \"36dcd5999f2fc40a2307a551c7d048b1dfe323bec6a17f010dc9f8537874321a\": not found"
Dec 13 14:35:25.383314 kubelet[2073]: I1213 14:35:25.383268 2073 scope.go:117] "RemoveContainer" containerID="abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99"
Dec 13 14:35:25.383692 env[1718]: time="2024-12-13T14:35:25.383623920Z" level=error msg="ContainerStatus for \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\": not found"
Dec 13 14:35:25.383895 kubelet[2073]: E1213 14:35:25.383849 2073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\": not found" containerID="abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99"
Dec 13 14:35:25.383969 kubelet[2073]: I1213 14:35:25.383903 2073 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99"} err="failed to get container status \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\": rpc error: code = NotFound desc = an error occurred when try to find container \"abbde658595c7725088a435438588f87db67066be48c8b80f5a961ad0076cf99\": not found"
Dec 13 14:35:25.383969 kubelet[2073]: I1213 14:35:25.383927 2073 scope.go:117] "RemoveContainer" containerID="5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720"
Dec 13 14:35:25.384365 env[1718]: time="2024-12-13T14:35:25.384307815Z" level=error msg="ContainerStatus for \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\": not found"
Dec 13 14:35:25.384564 kubelet[2073]: E1213 14:35:25.384539 2073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\": not found" containerID="5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720"
Dec 13 14:35:25.384644 kubelet[2073]: I1213 14:35:25.384568 2073 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720"} err="failed to get container status \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\": rpc error: code = NotFound desc = an error occurred when try to find container \"5170809552aa2e0da0759d3d1984aa60d0b2ffcee47be126805edc85dbd7c720\": not found"
Dec 13 14:35:25.384644 kubelet[2073]: I1213 14:35:25.384590 2073 scope.go:117] "RemoveContainer" containerID="f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348"
Dec 13 14:35:25.384859 env[1718]: time="2024-12-13T14:35:25.384792931Z" level=error msg="ContainerStatus for \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\": not found"
Dec 13 14:35:25.384999 kubelet[2073]: E1213 14:35:25.384974 2073 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\": not found" containerID="f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348"
Dec 13 14:35:25.385077 kubelet[2073]: I1213 14:35:25.385009 2073 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348"} err="failed to get container status \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\": rpc error: code = NotFound desc = an error occurred when try to find container \"f13cd516611e85f1fbcd45337c70665db50461d160e383ed8fb78397aff91348\": not found"
Dec 13 14:35:25.578891 kubelet[2073]: E1213 14:35:25.578721 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:26.579755 kubelet[2073]: E1213 14:35:26.579698 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:26.811629 kubelet[2073]: I1213 14:35:26.811579 2073 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" path="/var/lib/kubelet/pods/9536e26b-de09-4314-ae82-cb9537a031ba/volumes"
Dec 13 14:35:27.054723 kubelet[2073]: E1213 14:35:27.054668 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="mount-cgroup"
Dec 13 14:35:27.054723 kubelet[2073]: E1213 14:35:27.054701 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="apply-sysctl-overwrites"
Dec 13 14:35:27.054723 kubelet[2073]: E1213 14:35:27.054714 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="mount-bpf-fs"
Dec 13 14:35:27.054723 kubelet[2073]: E1213 14:35:27.054723 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="clean-cilium-state"
Dec 13 14:35:27.054723 kubelet[2073]: E1213 14:35:27.054730 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="cilium-agent"
Dec 13 14:35:27.055146 kubelet[2073]: I1213 14:35:27.054756 2073 memory_manager.go:354] "RemoveStaleState removing state" podUID="9536e26b-de09-4314-ae82-cb9537a031ba" containerName="cilium-agent"
Dec 13 14:35:27.061081 systemd[1]: Created slice kubepods-besteffort-poddd9dc9e7_74dd_4c3f_80f4_a4216f9845f3.slice.
Dec 13 14:35:27.071257 systemd[1]: Created slice kubepods-burstable-poddedea27a_6ad2_4886_9793_efdbf9cb42ac.slice.
Dec 13 14:35:27.183962 kubelet[2073]: I1213 14:35:27.183903 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64gv8\" (UniqueName: \"kubernetes.io/projected/dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3-kube-api-access-64gv8\") pod \"cilium-operator-5d85765b45-9xpb2\" (UID: \"dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3\") " pod="kube-system/cilium-operator-5d85765b45-9xpb2"
Dec 13 14:35:27.183962 kubelet[2073]: I1213 14:35:27.183962 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-etc-cni-netd\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184246 kubelet[2073]: I1213 14:35:27.183990 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbfrp\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-kube-api-access-jbfrp\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184246 kubelet[2073]: I1213 14:35:27.184015 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-bpf-maps\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184246 kubelet[2073]: I1213 14:35:27.184035 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-lib-modules\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184246 kubelet[2073]: I1213 14:35:27.184064 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-config-path\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184246 kubelet[2073]: I1213 14:35:27.184089 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-ipsec-secrets\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184481 kubelet[2073]: I1213 14:35:27.184110 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3-cilium-config-path\") pod \"cilium-operator-5d85765b45-9xpb2\" (UID: \"dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3\") " pod="kube-system/cilium-operator-5d85765b45-9xpb2"
Dec 13 14:35:27.184481 kubelet[2073]: I1213 14:35:27.184134 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-cgroup\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184481 kubelet[2073]: I1213 14:35:27.184156 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cni-path\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184481 kubelet[2073]: I1213 14:35:27.184178 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-xtables-lock\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184481 kubelet[2073]: I1213 14:35:27.184201 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-clustermesh-secrets\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184639 kubelet[2073]: I1213 14:35:27.184223 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-net\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184639 kubelet[2073]: I1213 14:35:27.184247 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-kernel\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184639 kubelet[2073]: I1213 14:35:27.184272 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hubble-tls\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184639 kubelet[2073]: I1213 14:35:27.184296 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-run\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.184639 kubelet[2073]: I1213 14:35:27.184318 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hostproc\") pod \"cilium-rt875\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") " pod="kube-system/cilium-rt875"
Dec 13 14:35:27.570085 env[1718]: time="2024-12-13T14:35:27.570033803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rt875,Uid:dedea27a-6ad2-4886-9793-efdbf9cb42ac,Namespace:kube-system,Attempt:0,}"
Dec 13 14:35:27.580241 kubelet[2073]: E1213 14:35:27.580186 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:27.599406 env[1718]: time="2024-12-13T14:35:27.599263133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:35:27.599406 env[1718]: time="2024-12-13T14:35:27.599333188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:35:27.599406 env[1718]: time="2024-12-13T14:35:27.599349593Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:35:27.601591 env[1718]: time="2024-12-13T14:35:27.601482483Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c pid=3859 runtime=io.containerd.runc.v2
Dec 13 14:35:27.632003 systemd[1]: Started cri-containerd-f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c.scope.
Dec 13 14:35:27.675172 env[1718]: time="2024-12-13T14:35:27.674691057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9xpb2,Uid:dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3,Namespace:kube-system,Attempt:0,}"
Dec 13 14:35:27.707599 env[1718]: time="2024-12-13T14:35:27.707531492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rt875,Uid:dedea27a-6ad2-4886-9793-efdbf9cb42ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\""
Dec 13 14:35:27.711484 env[1718]: time="2024-12-13T14:35:27.711431836Z" level=info msg="CreateContainer within sandbox \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:35:27.716569 env[1718]: time="2024-12-13T14:35:27.716500528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:35:27.716749 env[1718]: time="2024-12-13T14:35:27.716585654Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:35:27.716749 env[1718]: time="2024-12-13T14:35:27.716617332Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:35:27.717187 env[1718]: time="2024-12-13T14:35:27.717109550Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eda6532765a05ace10a0799b8068b2862e8eb1db72dfebf53ec23f2ca14ef07c pid=3903 runtime=io.containerd.runc.v2
Dec 13 14:35:27.737549 systemd[1]: Started cri-containerd-eda6532765a05ace10a0799b8068b2862e8eb1db72dfebf53ec23f2ca14ef07c.scope.
Dec 13 14:35:27.740443 env[1718]: time="2024-12-13T14:35:27.740393778Z" level=info msg="CreateContainer within sandbox \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\""
Dec 13 14:35:27.741159 env[1718]: time="2024-12-13T14:35:27.741122807Z" level=info msg="StartContainer for \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\""
Dec 13 14:35:27.772533 systemd[1]: Started cri-containerd-37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e.scope.
Dec 13 14:35:27.797430 systemd[1]: cri-containerd-37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e.scope: Deactivated successfully.
Dec 13 14:35:27.828943 env[1718]: time="2024-12-13T14:35:27.827126880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-9xpb2,Uid:dd9dc9e7-74dd-4c3f-80f4-a4216f9845f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"eda6532765a05ace10a0799b8068b2862e8eb1db72dfebf53ec23f2ca14ef07c\""
Dec 13 14:35:27.830048 env[1718]: time="2024-12-13T14:35:27.829960538Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:35:27.830694 env[1718]: time="2024-12-13T14:35:27.830641190Z" level=info msg="shim disconnected" id=37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e
Dec 13 14:35:27.830694 env[1718]: time="2024-12-13T14:35:27.830685386Z" level=warning msg="cleaning up after shim disconnected" id=37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e namespace=k8s.io
Dec 13 14:35:27.830839 env[1718]: time="2024-12-13T14:35:27.830697671Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:27.843039 env[1718]: time="2024-12-13T14:35:27.842977956Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3962 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:35:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Dec 13 14:35:27.843611 env[1718]: time="2024-12-13T14:35:27.843277308Z" level=error msg="copy shim log" error="read /proc/self/fd/84: file already closed"
Dec 13 14:35:27.843973 env[1718]: time="2024-12-13T14:35:27.843924590Z" level=error msg="Failed to pipe stderr of container \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\"" error="reading from a closed fifo"
Dec 13 14:35:27.844106 env[1718]: time="2024-12-13T14:35:27.844071208Z" level=error msg="Failed to pipe stdout of container \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\"" error="reading from a closed fifo"
Dec 13 14:35:27.847266 env[1718]: time="2024-12-13T14:35:27.847204670Z" level=error msg="StartContainer for \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Dec 13 14:35:27.847527 kubelet[2073]: E1213 14:35:27.847487 2073 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e"
Dec 13 14:35:27.849142 kubelet[2073]: E1213 14:35:27.849100 2073 kuberuntime_manager.go:1272] "Unhandled Error" err=<
Dec 13 14:35:27.849142 kubelet[2073]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Dec 13 14:35:27.849142 kubelet[2073]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Dec 13 14:35:27.849142 kubelet[2073]: rm /hostbin/cilium-mount
Dec 13 14:35:27.849308 kubelet[2073]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jbfrp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-rt875_kube-system(dedea27a-6ad2-4886-9793-efdbf9cb42ac): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Dec 13 14:35:27.849308 kubelet[2073]: > logger="UnhandledError"
Dec 13 14:35:27.850312 kubelet[2073]: E1213 14:35:27.850279 2073 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-rt875" podUID="dedea27a-6ad2-4886-9793-efdbf9cb42ac"
Dec 13 14:35:28.319455 env[1718]: time="2024-12-13T14:35:28.319323242Z" level=info msg="StopPodSandbox for \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\""
Dec 13 14:35:28.319624 env[1718]: time="2024-12-13T14:35:28.319471769Z" level=info msg="Container to stop \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:28.322633 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c-shm.mount: Deactivated successfully.
Dec 13 14:35:28.332979 systemd[1]: cri-containerd-f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c.scope: Deactivated successfully.
Dec 13 14:35:28.364047 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c-rootfs.mount: Deactivated successfully.
Dec 13 14:35:28.389049 env[1718]: time="2024-12-13T14:35:28.388988201Z" level=info msg="shim disconnected" id=f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c
Dec 13 14:35:28.389368 env[1718]: time="2024-12-13T14:35:28.389344185Z" level=warning msg="cleaning up after shim disconnected" id=f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c namespace=k8s.io
Dec 13 14:35:28.389450 env[1718]: time="2024-12-13T14:35:28.389437821Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:28.408739 env[1718]: time="2024-12-13T14:35:28.408683786Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3991 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:28.409106 env[1718]: time="2024-12-13T14:35:28.409071976Z" level=info msg="TearDown network for sandbox \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\" successfully"
Dec 13 14:35:28.409211 env[1718]: time="2024-12-13T14:35:28.409105264Z" level=info msg="StopPodSandbox for \"f2e5dcdebc3709047257f8c82a7be6b6810a847d5070db5962dac5f85d21f36c\" returns successfully"
Dec 13 14:35:28.482974 kubelet[2073]: E1213 14:35:28.482927 2073 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:28.499276 kubelet[2073]: I1213 14:35:28.499223 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hostproc\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499276 kubelet[2073]: I1213 14:35:28.499279 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-cgroup\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499306 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-xtables-lock\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499330 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-etc-cni-netd\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499365 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-ipsec-secrets\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499394 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-net\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499418 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cni-path\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499442 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-clustermesh-secrets\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499464 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-kernel\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499486 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-bpf-maps\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.499547 kubelet[2073]: I1213 14:35:28.499509 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-lib-modules\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499565 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hubble-tls\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499592 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jbfrp\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-kube-api-access-jbfrp\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499619 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-config-path\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499641 2073 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-run\") pod \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\" (UID: \"dedea27a-6ad2-4886-9793-efdbf9cb42ac\") "
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499723 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499761 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499785 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "xtables-lock".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.500055 kubelet[2073]: I1213 14:35:28.499808 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.500840 kubelet[2073]: I1213 14:35:28.500447 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hostproc" (OuterVolumeSpecName: "hostproc") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.500840 kubelet[2073]: I1213 14:35:28.500508 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.500840 kubelet[2073]: I1213 14:35:28.500531 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.500840 kubelet[2073]: I1213 14:35:28.500550 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cni-path" (OuterVolumeSpecName: "cni-path") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.501280 kubelet[2073]: I1213 14:35:28.501254 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.502352 kubelet[2073]: I1213 14:35:28.502327 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.509361 systemd[1]: var-lib-kubelet-pods-dedea27a\x2d6ad2\x2d4886\x2d9793\x2defdbf9cb42ac-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:35:28.520915 systemd[1]: var-lib-kubelet-pods-dedea27a\x2d6ad2\x2d4886\x2d9793\x2defdbf9cb42ac-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Dec 13 14:35:28.522213 kubelet[2073]: I1213 14:35:28.522170 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:35:28.523288 kubelet[2073]: I1213 14:35:28.523247 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:35:28.523973 kubelet[2073]: I1213 14:35:28.523945 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:35:28.527284 kubelet[2073]: I1213 14:35:28.527239 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-kube-api-access-jbfrp" (OuterVolumeSpecName: "kube-api-access-jbfrp") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "kube-api-access-jbfrp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:35:28.527448 kubelet[2073]: I1213 14:35:28.527329 2073 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dedea27a-6ad2-4886-9793-efdbf9cb42ac" (UID: "dedea27a-6ad2-4886-9793-efdbf9cb42ac"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:35:28.581456 kubelet[2073]: E1213 14:35:28.581318 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:28.600639 kubelet[2073]: I1213 14:35:28.600586 2073 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cni-path\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600639 kubelet[2073]: I1213 14:35:28.600634 2073 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-clustermesh-secrets\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600639 kubelet[2073]: I1213 14:35:28.600648 2073 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-kernel\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600659 2073 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-bpf-maps\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600672 2073 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-lib-modules\") on node 
\"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600682 2073 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hubble-tls\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600692 2073 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-jbfrp\" (UniqueName: \"kubernetes.io/projected/dedea27a-6ad2-4886-9793-efdbf9cb42ac-kube-api-access-jbfrp\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600704 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-config-path\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600714 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-run\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600727 2073 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-hostproc\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600736 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-cgroup\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600747 2073 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-xtables-lock\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 
14:35:28.600757 2073 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-etc-cni-netd\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600768 2073 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dedea27a-6ad2-4886-9793-efdbf9cb42ac-cilium-ipsec-secrets\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.600946 kubelet[2073]: I1213 14:35:28.600779 2073 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dedea27a-6ad2-4886-9793-efdbf9cb42ac-host-proc-sys-net\") on node \"172.31.23.152\" DevicePath \"\"" Dec 13 14:35:28.724405 kubelet[2073]: E1213 14:35:28.724335 2073 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:35:28.833816 systemd[1]: Removed slice kubepods-burstable-poddedea27a_6ad2_4886_9793_efdbf9cb42ac.slice. Dec 13 14:35:29.301345 systemd[1]: var-lib-kubelet-pods-dedea27a\x2d6ad2\x2d4886\x2d9793\x2defdbf9cb42ac-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djbfrp.mount: Deactivated successfully. Dec 13 14:35:29.301480 systemd[1]: var-lib-kubelet-pods-dedea27a\x2d6ad2\x2d4886\x2d9793\x2defdbf9cb42ac-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Dec 13 14:35:29.324329 kubelet[2073]: I1213 14:35:29.324302 2073 scope.go:117] "RemoveContainer" containerID="37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e" Dec 13 14:35:29.331487 env[1718]: time="2024-12-13T14:35:29.331333973Z" level=info msg="RemoveContainer for \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\"" Dec 13 14:35:29.343772 env[1718]: time="2024-12-13T14:35:29.343725590Z" level=info msg="RemoveContainer for \"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e\" returns successfully" Dec 13 14:35:29.415692 kubelet[2073]: E1213 14:35:29.415506 2073 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dedea27a-6ad2-4886-9793-efdbf9cb42ac" containerName="mount-cgroup" Dec 13 14:35:29.415940 kubelet[2073]: I1213 14:35:29.415717 2073 memory_manager.go:354] "RemoveStaleState removing state" podUID="dedea27a-6ad2-4886-9793-efdbf9cb42ac" containerName="mount-cgroup" Dec 13 14:35:29.441355 systemd[1]: Created slice kubepods-burstable-podb8291774_0e7a_4913_bb8c_20918e503b87.slice. 
Dec 13 14:35:29.506649 kubelet[2073]: I1213 14:35:29.506601 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-cilium-run\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506649 kubelet[2073]: I1213 14:35:29.506653 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-bpf-maps\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506686 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-cilium-cgroup\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506709 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-cni-path\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506728 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b8291774-0e7a-4913-bb8c-20918e503b87-cilium-config-path\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506749 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kube-api-access-jnwk6\" (UniqueName: \"kubernetes.io/projected/b8291774-0e7a-4913-bb8c-20918e503b87-kube-api-access-jnwk6\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506775 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-lib-modules\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506795 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b8291774-0e7a-4913-bb8c-20918e503b87-clustermesh-secrets\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506820 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b8291774-0e7a-4913-bb8c-20918e503b87-cilium-ipsec-secrets\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506845 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-host-proc-sys-kernel\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506888 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/b8291774-0e7a-4913-bb8c-20918e503b87-hubble-tls\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506914 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-hostproc\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.506939 kubelet[2073]: I1213 14:35:29.506937 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-etc-cni-netd\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.507392 kubelet[2073]: I1213 14:35:29.506959 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-xtables-lock\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.507392 kubelet[2073]: I1213 14:35:29.506987 2073 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b8291774-0e7a-4913-bb8c-20918e503b87-host-proc-sys-net\") pod \"cilium-zcdv5\" (UID: \"b8291774-0e7a-4913-bb8c-20918e503b87\") " pod="kube-system/cilium-zcdv5" Dec 13 14:35:29.582568 kubelet[2073]: E1213 14:35:29.582359 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:29.754503 env[1718]: time="2024-12-13T14:35:29.754318330Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-zcdv5,Uid:b8291774-0e7a-4913-bb8c-20918e503b87,Namespace:kube-system,Attempt:0,}" Dec 13 14:35:29.802183 env[1718]: time="2024-12-13T14:35:29.800794635Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:29.802397 env[1718]: time="2024-12-13T14:35:29.802206104Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:29.802397 env[1718]: time="2024-12-13T14:35:29.802241402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:29.802644 env[1718]: time="2024-12-13T14:35:29.802587493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0 pid=4019 runtime=io.containerd.runc.v2 Dec 13 14:35:29.821041 systemd[1]: Started cri-containerd-203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0.scope. 
Dec 13 14:35:29.852265 env[1718]: time="2024-12-13T14:35:29.851853531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zcdv5,Uid:b8291774-0e7a-4913-bb8c-20918e503b87,Namespace:kube-system,Attempt:0,} returns sandbox id \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\"" Dec 13 14:35:29.863703 env[1718]: time="2024-12-13T14:35:29.863613832Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:35:29.892196 env[1718]: time="2024-12-13T14:35:29.892137109Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db\"" Dec 13 14:35:29.893200 env[1718]: time="2024-12-13T14:35:29.893164012Z" level=info msg="StartContainer for \"1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db\"" Dec 13 14:35:29.938808 systemd[1]: Started cri-containerd-1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db.scope. Dec 13 14:35:30.008566 env[1718]: time="2024-12-13T14:35:30.008304855Z" level=info msg="StartContainer for \"1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db\" returns successfully" Dec 13 14:35:30.055151 systemd[1]: cri-containerd-1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db.scope: Deactivated successfully. 
Dec 13 14:35:30.140522 env[1718]: time="2024-12-13T14:35:30.139785662Z" level=info msg="shim disconnected" id=1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db Dec 13 14:35:30.140522 env[1718]: time="2024-12-13T14:35:30.139850490Z" level=warning msg="cleaning up after shim disconnected" id=1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db namespace=k8s.io Dec 13 14:35:30.140522 env[1718]: time="2024-12-13T14:35:30.139864526Z" level=info msg="cleaning up dead shim" Dec 13 14:35:30.151842 env[1718]: time="2024-12-13T14:35:30.151781864Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4103 runtime=io.containerd.runc.v2\n" Dec 13 14:35:30.358010 env[1718]: time="2024-12-13T14:35:30.357931389Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:35:30.394127 env[1718]: time="2024-12-13T14:35:30.393592096Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb\"" Dec 13 14:35:30.395508 env[1718]: time="2024-12-13T14:35:30.395456010Z" level=info msg="StartContainer for \"f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb\"" Dec 13 14:35:30.468210 systemd[1]: Started cri-containerd-f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb.scope. Dec 13 14:35:30.570299 env[1718]: time="2024-12-13T14:35:30.569914553Z" level=info msg="StartContainer for \"f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb\" returns successfully" Dec 13 14:35:30.570567 systemd[1]: cri-containerd-f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb.scope: Deactivated successfully. 
Dec 13 14:35:30.583023 kubelet[2073]: E1213 14:35:30.582792 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:30.678343 env[1718]: time="2024-12-13T14:35:30.678182460Z" level=info msg="shim disconnected" id=f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb Dec 13 14:35:30.678343 env[1718]: time="2024-12-13T14:35:30.678242609Z" level=warning msg="cleaning up after shim disconnected" id=f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb namespace=k8s.io Dec 13 14:35:30.678343 env[1718]: time="2024-12-13T14:35:30.678255173Z" level=info msg="cleaning up dead shim" Dec 13 14:35:30.691112 env[1718]: time="2024-12-13T14:35:30.691054236Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4166 runtime=io.containerd.runc.v2\n" Dec 13 14:35:30.701254 kubelet[2073]: I1213 14:35:30.699621 2073 setters.go:600] "Node became not ready" node="172.31.23.152" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:35:30Z","lastTransitionTime":"2024-12-13T14:35:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:35:30.813996 kubelet[2073]: I1213 14:35:30.813234 2073 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dedea27a-6ad2-4886-9793-efdbf9cb42ac" path="/var/lib/kubelet/pods/dedea27a-6ad2-4886-9793-efdbf9cb42ac/volumes" Dec 13 14:35:30.940742 kubelet[2073]: W1213 14:35:30.938427 2073 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddedea27a_6ad2_4886_9793_efdbf9cb42ac.slice/cri-containerd-37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e.scope WatchSource:0}: container 
"37f6ca9f3c4ee3c43f793f11e31690a75a5213daa18195cdec7f1d535151e70e" in namespace "k8s.io": not found Dec 13 14:35:31.312467 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb-rootfs.mount: Deactivated successfully. Dec 13 14:35:31.368287 env[1718]: time="2024-12-13T14:35:31.368223621Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:35:31.396742 env[1718]: time="2024-12-13T14:35:31.396680770Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e\"" Dec 13 14:35:31.398255 env[1718]: time="2024-12-13T14:35:31.398216695Z" level=info msg="StartContainer for \"f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e\"" Dec 13 14:35:31.448016 systemd[1]: Started cri-containerd-f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e.scope. Dec 13 14:35:31.530629 env[1718]: time="2024-12-13T14:35:31.530580082Z" level=info msg="StartContainer for \"f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e\" returns successfully" Dec 13 14:35:31.551277 systemd[1]: cri-containerd-f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e.scope: Deactivated successfully. 
Dec 13 14:35:31.583254 kubelet[2073]: E1213 14:35:31.583130 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 14:35:31.686349 env[1718]: time="2024-12-13T14:35:31.686295463Z" level=info msg="shim disconnected" id=f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e Dec 13 14:35:31.686625 env[1718]: time="2024-12-13T14:35:31.686594827Z" level=warning msg="cleaning up after shim disconnected" id=f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e namespace=k8s.io Dec 13 14:35:31.686625 env[1718]: time="2024-12-13T14:35:31.686618975Z" level=info msg="cleaning up dead shim" Dec 13 14:35:31.687993 env[1718]: time="2024-12-13T14:35:31.687953815Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:31.700009 env[1718]: time="2024-12-13T14:35:31.699951802Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:31.703947 env[1718]: time="2024-12-13T14:35:31.703785035Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:35:31.704261 env[1718]: time="2024-12-13T14:35:31.704222281Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 14:35:31.708541 env[1718]: time="2024-12-13T14:35:31.708489303Z" level=info msg="CreateContainer 
within sandbox \"eda6532765a05ace10a0799b8068b2862e8eb1db72dfebf53ec23f2ca14ef07c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 14:35:31.713052 env[1718]: time="2024-12-13T14:35:31.712976426Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4225 runtime=io.containerd.runc.v2\n" Dec 13 14:35:31.732380 env[1718]: time="2024-12-13T14:35:31.732320098Z" level=info msg="CreateContainer within sandbox \"eda6532765a05ace10a0799b8068b2862e8eb1db72dfebf53ec23f2ca14ef07c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4602b5c372e84b3290013d690f4a93b3dabf72fc503413a786f9a409730f31c9\"" Dec 13 14:35:31.733304 env[1718]: time="2024-12-13T14:35:31.733267372Z" level=info msg="StartContainer for \"4602b5c372e84b3290013d690f4a93b3dabf72fc503413a786f9a409730f31c9\"" Dec 13 14:35:31.758010 systemd[1]: Started cri-containerd-4602b5c372e84b3290013d690f4a93b3dabf72fc503413a786f9a409730f31c9.scope. Dec 13 14:35:31.814664 env[1718]: time="2024-12-13T14:35:31.814600997Z" level=info msg="StartContainer for \"4602b5c372e84b3290013d690f4a93b3dabf72fc503413a786f9a409730f31c9\" returns successfully" Dec 13 14:35:32.312460 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e-rootfs.mount: Deactivated successfully. 
Dec 13 14:35:32.386584 kubelet[2073]: I1213 14:35:32.386515 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-9xpb2" podStartSLOduration=2.5096722959999997 podStartE2EDuration="6.386490873s" podCreationTimestamp="2024-12-13 14:35:26 +0000 UTC" firstStartedPulling="2024-12-13 14:35:27.829331961 +0000 UTC m=+80.188561552" lastFinishedPulling="2024-12-13 14:35:31.706150541 +0000 UTC m=+84.065380129" observedRunningTime="2024-12-13 14:35:32.386066732 +0000 UTC m=+84.745296340" watchObservedRunningTime="2024-12-13 14:35:32.386490873 +0000 UTC m=+84.745720479"
Dec 13 14:35:32.389776 env[1718]: time="2024-12-13T14:35:32.388715130Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:35:32.455596 env[1718]: time="2024-12-13T14:35:32.455528002Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7\""
Dec 13 14:35:32.456949 env[1718]: time="2024-12-13T14:35:32.456858888Z" level=info msg="StartContainer for \"82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7\""
Dec 13 14:35:32.502958 systemd[1]: Started cri-containerd-82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7.scope.
Dec 13 14:35:32.569474 systemd[1]: cri-containerd-82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7.scope: Deactivated successfully.
Dec 13 14:35:32.571268 env[1718]: time="2024-12-13T14:35:32.571223580Z" level=info msg="StartContainer for \"82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7\" returns successfully"
Dec 13 14:35:32.583619 kubelet[2073]: E1213 14:35:32.583540 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:32.612565 env[1718]: time="2024-12-13T14:35:32.612507141Z" level=info msg="shim disconnected" id=82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7
Dec 13 14:35:32.612565 env[1718]: time="2024-12-13T14:35:32.612560867Z" level=warning msg="cleaning up after shim disconnected" id=82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7 namespace=k8s.io
Dec 13 14:35:32.612565 env[1718]: time="2024-12-13T14:35:32.612574238Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:32.622590 env[1718]: time="2024-12-13T14:35:32.622528738Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4320 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:33.312085 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7-rootfs.mount: Deactivated successfully.
Dec 13 14:35:33.406785 env[1718]: time="2024-12-13T14:35:33.406723393Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:35:33.471847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1736902620.mount: Deactivated successfully.
Dec 13 14:35:33.490055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687400483.mount: Deactivated successfully.
Dec 13 14:35:33.495246 env[1718]: time="2024-12-13T14:35:33.495184183Z" level=info msg="CreateContainer within sandbox \"203e23e492dd37cf6ecc8dc355d4ed283358a79aff94c0819b1e2dd3f56540e0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd\""
Dec 13 14:35:33.496483 env[1718]: time="2024-12-13T14:35:33.496440561Z" level=info msg="StartContainer for \"ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd\""
Dec 13 14:35:33.529419 systemd[1]: Started cri-containerd-ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd.scope.
Dec 13 14:35:33.585949 kubelet[2073]: E1213 14:35:33.585066 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:33.590044 env[1718]: time="2024-12-13T14:35:33.589953200Z" level=info msg="StartContainer for \"ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd\" returns successfully"
Dec 13 14:35:33.726252 kubelet[2073]: E1213 14:35:33.726165 2073 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:35:34.071817 kubelet[2073]: W1213 14:35:34.067038 2073 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8291774_0e7a_4913_bb8c_20918e503b87.slice/cri-containerd-1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db.scope WatchSource:0}: task 1a7293a6ca2aebc7262731ada4435696bc13d0fb654ec92159f5a06baec9f4db not found: not found
Dec 13 14:35:34.423812 kubelet[2073]: I1213 14:35:34.423612 2073 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zcdv5" podStartSLOduration=5.423571424 podStartE2EDuration="5.423571424s" podCreationTimestamp="2024-12-13 14:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:35:34.422460585 +0000 UTC m=+86.781690213" watchObservedRunningTime="2024-12-13 14:35:34.423571424 +0000 UTC m=+86.782801030"
Dec 13 14:35:34.544938 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 14:35:34.585890 kubelet[2073]: E1213 14:35:34.585809 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:35.586487 kubelet[2073]: E1213 14:35:35.586412 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:36.587493 kubelet[2073]: E1213 14:35:36.587447 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:37.193492 kubelet[2073]: W1213 14:35:37.193444 2073 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8291774_0e7a_4913_bb8c_20918e503b87.slice/cri-containerd-f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb.scope WatchSource:0}: task f9d0722066280aac0f34b9bffdb9dfd1fe586e25a9a8a0de8251649a8613dbcb not found: not found
Dec 13 14:35:37.588912 kubelet[2073]: E1213 14:35:37.588859 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:38.229850 (udev-worker)[4886]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:35:38.233241 (udev-worker)[4889]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:35:38.242726 systemd-networkd[1453]: lxc_health: Link UP
Dec 13 14:35:38.280549 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 14:35:38.280904 systemd-networkd[1453]: lxc_health: Gained carrier
Dec 13 14:35:38.590251 kubelet[2073]: E1213 14:35:38.590187 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:38.921315 systemd[1]: run-containerd-runc-k8s.io-ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd-runc.2bYlD9.mount: Deactivated successfully.
Dec 13 14:35:39.594154 kubelet[2073]: E1213 14:35:39.594104 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:39.710182 systemd-networkd[1453]: lxc_health: Gained IPv6LL
Dec 13 14:35:40.305416 kubelet[2073]: W1213 14:35:40.305365 2073 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8291774_0e7a_4913_bb8c_20918e503b87.slice/cri-containerd-f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e.scope WatchSource:0}: task f3549add4cd8a31b7779dd12cc4d0aab7f1f26166bc623c07830d1e399c9a90e not found: not found
Dec 13 14:35:40.595336 kubelet[2073]: E1213 14:35:40.595192 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:41.315027 systemd[1]: run-containerd-runc-k8s.io-ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd-runc.07atwv.mount: Deactivated successfully.
Dec 13 14:35:41.597018 kubelet[2073]: E1213 14:35:41.596842 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:42.597884 kubelet[2073]: E1213 14:35:42.597763 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:43.424586 kubelet[2073]: W1213 14:35:43.424534 2073 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8291774_0e7a_4913_bb8c_20918e503b87.slice/cri-containerd-82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7.scope WatchSource:0}: task 82c3d3d87eb6d7d1b96b1e62e3d4995f194406eb109dc0a7a8cd02e8d468cbb7 not found: not found
Dec 13 14:35:43.564245 systemd[1]: run-containerd-runc-k8s.io-ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd-runc.32wPkt.mount: Deactivated successfully.
Dec 13 14:35:43.599317 kubelet[2073]: E1213 14:35:43.599236 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:44.600035 kubelet[2073]: E1213 14:35:44.599949 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:45.600157 kubelet[2073]: E1213 14:35:45.600094 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:45.788808 systemd[1]: run-containerd-runc-k8s.io-ada9a81283a60c89b47562edd5826709e2094a1f0be9c62df314b159229fc4bd-runc.6Ctos4.mount: Deactivated successfully.
Dec 13 14:35:46.601020 kubelet[2073]: E1213 14:35:46.600958 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:47.602332 kubelet[2073]: E1213 14:35:47.602268 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:48.483255 kubelet[2073]: E1213 14:35:48.483198 2073 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 14:35:48.602479 kubelet[2073]: E1213 14:35:48.602404 2073 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"