Dec 13 02:21:01.102596 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:21:01.102632 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:01.102646 kernel: BIOS-provided physical RAM map:
Dec 13 02:21:01.102656 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 02:21:01.102666 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 02:21:01.102676 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 02:21:01.102691 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 02:21:01.102703 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 02:21:01.102714 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 02:21:01.102725 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 02:21:01.102737 kernel: NX (Execute Disable) protection: active
Dec 13 02:21:01.102746 kernel: SMBIOS 2.7 present.
Dec 13 02:21:01.102758 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 02:21:01.102768 kernel: Hypervisor detected: KVM
Dec 13 02:21:01.102784 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 02:21:01.102795 kernel: kvm-clock: cpu 0, msr 6519b001, primary cpu clock
Dec 13 02:21:01.102806 kernel: kvm-clock: using sched offset of 7869313534 cycles
Dec 13 02:21:01.102819 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 02:21:01.102831 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 02:21:01.102845 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:21:01.102861 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:21:01.102873 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 02:21:01.102886 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:21:01.102899 kernel: Using GB pages for direct mapping
Dec 13 02:21:01.102911 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:21:01.102925 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 02:21:01.102938 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 02:21:01.102951 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 02:21:01.102965 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 02:21:01.102980 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 02:21:01.102994 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:21:01.103007 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 02:21:01.103020 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 02:21:01.103033 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 02:21:01.103045 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 02:21:01.103058 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 02:21:01.103071 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 02:21:01.103088 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 02:21:01.103101 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 02:21:01.103115 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 02:21:01.103133 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 02:21:01.103147 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 02:21:01.103161 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 02:21:01.103175 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 02:21:01.103192 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 02:21:01.103206 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 02:21:01.103220 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 02:21:01.103234 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 02:21:01.103248 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 02:21:01.103262 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 02:21:01.103274 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 02:21:01.103287 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 02:21:01.103303 kernel: Zone ranges:
Dec 13 02:21:01.103318 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:21:01.103331 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 02:21:01.103345 kernel: Normal empty
Dec 13 02:21:01.103359 kernel: Movable zone start for each node
Dec 13 02:21:01.103373 kernel: Early memory node ranges
Dec 13 02:21:01.103387 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 02:21:01.103401 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 02:21:01.103414 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 02:21:01.103431 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:21:01.103445 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 02:21:01.103459 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 02:21:01.103471 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 02:21:01.103484 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 02:21:01.103498 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 02:21:01.103512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 02:21:01.103527 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:21:01.103559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 02:21:01.103577 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 02:21:01.103673 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:21:01.103690 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:21:01.103704 kernel: TSC deadline timer available
Dec 13 02:21:01.103717 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 02:21:01.103732 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 02:21:01.103745 kernel: Booting paravirtualized kernel on KVM
Dec 13 02:21:01.103760 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:21:01.103774 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 02:21:01.103792 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 02:21:01.103813 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 02:21:01.103827 kernel: pcpu-alloc: [0] 0 1
Dec 13 02:21:01.103840 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 02:21:01.103854 kernel: kvm-guest: PV spinlocks enabled
Dec 13 02:21:01.103868 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 02:21:01.103883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 02:21:01.103897 kernel: Policy zone: DMA32
Dec 13 02:21:01.103913 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:01.103930 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:21:01.103944 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:21:01.103959 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 02:21:01.103973 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:21:01.103987 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 123080K reserved, 0K cma-reserved)
Dec 13 02:21:01.104001 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 02:21:01.104014 kernel: Kernel/User page tables isolation: enabled
Dec 13 02:21:01.104028 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:21:01.104045 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:21:01.104059 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:21:01.104074 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:21:01.104088 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 02:21:01.104102 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:21:01.104117 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:21:01.104131 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:21:01.104187 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 02:21:01.104200 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 02:21:01.104217 kernel: random: crng init done
Dec 13 02:21:01.104314 kernel: Console: colour VGA+ 80x25
Dec 13 02:21:01.104334 kernel: printk: console [ttyS0] enabled
Dec 13 02:21:01.104349 kernel: ACPI: Core revision 20210730
Dec 13 02:21:01.104363 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 02:21:01.104376 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:21:01.104388 kernel: x2apic enabled
Dec 13 02:21:01.104402 kernel: Switched APIC routing to physical x2apic.
Dec 13 02:21:01.104416 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:21:01.104433 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 02:21:01.104448 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:21:01.104462 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:21:01.104477 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:21:01.104501 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 02:21:01.104518 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:21:01.104533 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 02:21:01.104603 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 02:21:01.104682 kernel: RETBleed: Vulnerable
Dec 13 02:21:01.104696 kernel: Speculative Store Bypass: Vulnerable
Dec 13 02:21:01.104932 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:21:01.104945 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 02:21:01.104960 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 02:21:01.104973 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:21:01.104992 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:21:01.105007 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:21:01.105155 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:21:01.105172 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:21:01.105188 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 02:21:01.105206 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 02:21:01.105220 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 02:21:01.105234 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 02:21:01.105248 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:21:01.105263 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:21:01.105277 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:21:01.105291 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 02:21:01.105306 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 02:21:01.105320 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 02:21:01.105335 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 02:21:01.105349 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 02:21:01.105363 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:21:01.105380 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:21:01.105394 kernel: LSM: Security Framework initializing
Dec 13 02:21:01.105409 kernel: SELinux: Initializing.
Dec 13 02:21:01.105423 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:21:01.105438 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 02:21:01.105452 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 02:21:01.105467 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 02:21:01.105573 kernel: signal: max sigframe size: 3632
Dec 13 02:21:01.105589 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:21:01.105602 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 02:21:01.105617 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:21:01.105629 kernel: x86: Booting SMP configuration:
Dec 13 02:21:01.105642 kernel: .... node #0, CPUs: #1
Dec 13 02:21:01.105654 kernel: kvm-clock: cpu 1, msr 6519b041, secondary cpu clock
Dec 13 02:21:01.105667 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 02:21:01.105681 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 02:21:01.105695 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:21:01.105744 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 02:21:01.105759 kernel: smpboot: Max logical packages: 1
Dec 13 02:21:01.105776 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 02:21:01.105790 kernel: devtmpfs: initialized
Dec 13 02:21:01.105802 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:21:01.105816 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:21:01.106030 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 02:21:01.106045 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:21:01.106058 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:21:01.106071 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:21:01.106084 kernel: audit: type=2000 audit(1734056459.329:1): state=initialized audit_enabled=0 res=1
Dec 13 02:21:01.106100 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:21:01.106112 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:21:01.106125 kernel: cpuidle: using governor menu
Dec 13 02:21:01.106170 kernel: ACPI: bus type PCI registered
Dec 13 02:21:01.106257 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:21:01.106274 kernel: dca service started, version 1.12.1
Dec 13 02:21:01.106286 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:21:01.106300 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:21:01.106362 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:21:01.106382 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:21:01.106459 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:21:01.106474 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:21:01.106487 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:21:01.106500 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:21:01.106513 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:21:01.106526 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:21:01.106558 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:21:01.106572 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 02:21:01.106589 kernel: ACPI: Interpreter enabled
Dec 13 02:21:01.106604 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:21:01.106618 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:21:01.106632 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:21:01.106647 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 02:21:01.106735 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 02:21:01.107015 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:21:01.107146 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 02:21:01.107169 kernel: acpiphp: Slot [3] registered
Dec 13 02:21:01.107184 kernel: acpiphp: Slot [4] registered
Dec 13 02:21:01.107199 kernel: acpiphp: Slot [5] registered
Dec 13 02:21:01.107214 kernel: acpiphp: Slot [6] registered
Dec 13 02:21:01.107228 kernel: acpiphp: Slot [7] registered
Dec 13 02:21:01.107243 kernel: acpiphp: Slot [8] registered
Dec 13 02:21:01.107257 kernel: acpiphp: Slot [9] registered
Dec 13 02:21:01.107272 kernel: acpiphp: Slot [10] registered
Dec 13 02:21:01.107286 kernel: acpiphp: Slot [11] registered
Dec 13 02:21:01.107411 kernel: acpiphp: Slot [12] registered
Dec 13 02:21:01.107426 kernel: acpiphp: Slot [13] registered
Dec 13 02:21:01.107441 kernel: acpiphp: Slot [14] registered
Dec 13 02:21:01.107455 kernel: acpiphp: Slot [15] registered
Dec 13 02:21:01.107470 kernel: acpiphp: Slot [16] registered
Dec 13 02:21:01.107484 kernel: acpiphp: Slot [17] registered
Dec 13 02:21:01.107499 kernel: acpiphp: Slot [18] registered
Dec 13 02:21:01.107514 kernel: acpiphp: Slot [19] registered
Dec 13 02:21:01.107528 kernel: acpiphp: Slot [20] registered
Dec 13 02:21:01.107565 kernel: acpiphp: Slot [21] registered
Dec 13 02:21:01.107577 kernel: acpiphp: Slot [22] registered
Dec 13 02:21:01.107588 kernel: acpiphp: Slot [23] registered
Dec 13 02:21:01.107600 kernel: acpiphp: Slot [24] registered
Dec 13 02:21:01.107612 kernel: acpiphp: Slot [25] registered
Dec 13 02:21:01.107625 kernel: acpiphp: Slot [26] registered
Dec 13 02:21:01.107637 kernel: acpiphp: Slot [27] registered
Dec 13 02:21:01.107651 kernel: acpiphp: Slot [28] registered
Dec 13 02:21:01.107663 kernel: acpiphp: Slot [29] registered
Dec 13 02:21:01.107674 kernel: acpiphp: Slot [30] registered
Dec 13 02:21:01.107690 kernel: acpiphp: Slot [31] registered
Dec 13 02:21:01.107702 kernel: PCI host bridge to bus 0000:00
Dec 13 02:21:01.107857 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:21:01.107963 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:21:01.108119 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:21:01.108230 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 02:21:01.108372 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 02:21:01.108751 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 02:21:01.108887 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 02:21:01.109011 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 02:21:01.109124 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 02:21:01.109233 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 02:21:01.109343 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 02:21:01.109556 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 02:21:01.109681 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 02:21:01.109794 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 02:21:01.109976 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 02:21:01.110091 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 02:21:01.110213 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 02:21:01.110393 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 02:21:01.110600 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 02:21:01.110726 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:21:01.110858 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 02:21:01.112143 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 02:21:01.112304 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 02:21:01.112431 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 02:21:01.112452 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 02:21:01.112471 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 02:21:01.112486 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 02:21:01.112501 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 02:21:01.112515 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 02:21:01.112530 kernel: iommu: Default domain type: Translated
Dec 13 02:21:01.112563 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:21:01.112773 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 02:21:01.112905 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:21:01.113249 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 02:21:01.113279 kernel: vgaarb: loaded
Dec 13 02:21:01.113295 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:21:01.113311 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Dec 13 02:21:01.113323 kernel: PTP clock support registered
Dec 13 02:21:01.113338 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:21:01.113352 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:21:01.113367 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 02:21:01.113382 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 02:21:01.113399 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 02:21:01.113414 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 02:21:01.113429 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 02:21:01.113444 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:21:01.113459 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:21:01.113473 kernel: pnp: PnP ACPI init
Dec 13 02:21:01.113488 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 02:21:01.113503 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:21:01.113518 kernel: NET: Registered PF_INET protocol family
Dec 13 02:21:01.113535 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 02:21:01.113590 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 02:21:01.113605 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:21:01.113620 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 02:21:01.113635 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:21:01.113650 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 02:21:01.113665 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:21:01.113680 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 02:21:01.113695 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:21:01.113713 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:21:01.113915 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:21:01.114089 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:21:01.114204 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:21:01.114381 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 02:21:01.114593 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 02:21:01.114883 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 02:21:01.114910 kernel: PCI: CLS 0 bytes, default 64
Dec 13 02:21:01.114927 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 02:21:01.114979 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 02:21:01.114994 kernel: clocksource: Switched to clocksource tsc
Dec 13 02:21:01.115009 kernel: Initialise system trusted keyrings
Dec 13 02:21:01.115024 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 02:21:01.115319 kernel: Key type asymmetric registered
Dec 13 02:21:01.115336 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:21:01.115351 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:21:01.115370 kernel: io scheduler mq-deadline registered
Dec 13 02:21:01.115385 kernel: io scheduler kyber registered
Dec 13 02:21:01.115398 kernel: io scheduler bfq registered
Dec 13 02:21:01.115413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:21:01.115428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:21:01.115443 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:21:01.115458 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 02:21:01.115473 kernel: i8042: Warning: Keylock active
Dec 13 02:21:01.115488 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 02:21:01.115506 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 02:21:01.115673 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 02:21:01.115871 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 02:21:01.115996 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T02:21:00 UTC (1734056460)
Dec 13 02:21:01.116319 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 02:21:01.116344 kernel: intel_pstate: CPU model not supported
Dec 13 02:21:01.116360 kernel: NET: Registered PF_INET6 protocol family
Dec 13 02:21:01.116375 kernel: Segment Routing with IPv6
Dec 13 02:21:01.116394 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 02:21:01.116409 kernel: NET: Registered PF_PACKET protocol family
Dec 13 02:21:01.116424 kernel: Key type dns_resolver registered
Dec 13 02:21:01.116439 kernel: IPI shorthand broadcast: enabled
Dec 13 02:21:01.116454 kernel: sched_clock: Marking stable (721024373, 302850077)->(1209853586, -185979136)
Dec 13 02:21:01.116469 kernel: registered taskstats version 1
Dec 13 02:21:01.116484 kernel: Loading compiled-in X.509 certificates
Dec 13 02:21:01.116499 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e'
Dec 13 02:21:01.116513 kernel: Key type .fscrypt registered
Dec 13 02:21:01.116530 kernel: Key type fscrypt-provisioning registered
Dec 13 02:21:01.116556 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 02:21:01.116569 kernel: ima: Allocated hash algorithm: sha1
Dec 13 02:21:01.116730 kernel: ima: No architecture policies found
Dec 13 02:21:01.116744 kernel: clk: Disabling unused clocks
Dec 13 02:21:01.116759 kernel: Freeing unused kernel image (initmem) memory: 47476K
Dec 13 02:21:01.116772 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 02:21:01.116786 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 02:21:01.116800 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 02:21:01.116818 kernel: Run /init as init process
Dec 13 02:21:01.116833 kernel: with arguments:
Dec 13 02:21:01.116847 kernel: /init
Dec 13 02:21:01.116860 kernel: with environment:
Dec 13 02:21:01.116872 kernel: HOME=/
Dec 13 02:21:01.116885 kernel: TERM=linux
Dec 13 02:21:01.116898 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 02:21:01.116915 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 02:21:01.116935 systemd[1]: Detected virtualization amazon.
Dec 13 02:21:01.116951 systemd[1]: Detected architecture x86-64.
Dec 13 02:21:01.116966 systemd[1]: Running in initrd.
Dec 13 02:21:01.116982 systemd[1]: No hostname configured, using default hostname.
Dec 13 02:21:01.117014 systemd[1]: Hostname set to <localhost>.
Dec 13 02:21:01.117035 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 02:21:01.117054 systemd[1]: Queued start job for default target initrd.target.
Dec 13 02:21:01.117070 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 02:21:01.117086 systemd[1]: Reached target cryptsetup.target.
Dec 13 02:21:01.117103 systemd[1]: Reached target paths.target.
Dec 13 02:21:01.117120 systemd[1]: Reached target slices.target.
Dec 13 02:21:01.117135 systemd[1]: Reached target swap.target.
Dec 13 02:21:01.117150 systemd[1]: Reached target timers.target.
Dec 13 02:21:01.117168 systemd[1]: Listening on iscsid.socket.
Dec 13 02:21:01.117184 systemd[1]: Listening on iscsiuio.socket.
Dec 13 02:21:01.117201 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 02:21:01.117218 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 02:21:01.117235 systemd[1]: Listening on systemd-journald.socket.
Dec 13 02:21:01.117252 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 02:21:01.117268 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 02:21:01.117285 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 02:21:01.117303 systemd[1]: Reached target sockets.target.
Dec 13 02:21:01.117322 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 02:21:01.117339 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 02:21:01.117355 systemd[1]: Finished network-cleanup.service.
Dec 13 02:21:01.117372 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 02:21:01.117392 systemd[1]: Starting systemd-journald.service...
Dec 13 02:21:01.117409 systemd[1]: Starting systemd-modules-load.service...
Dec 13 02:21:01.117425 systemd[1]: Starting systemd-resolved.service...
Dec 13 02:21:01.117441 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 02:21:01.117459 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 02:21:01.117478 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 02:21:01.117494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 02:21:01.117511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 02:21:01.117533 systemd-journald[185]: Journal started
Dec 13 02:21:01.117685 systemd-journald[185]: Runtime Journal (/run/log/journal/ec214f889ba1a8a8b085b624f2426079) is 4.8M, max 38.7M, 33.9M free.
Dec 13 02:21:01.162153 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 02:21:01.326130 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 02:21:01.326273 kernel: Bridge firewalling registered
Dec 13 02:21:01.326298 kernel: SCSI subsystem initialized
Dec 13 02:21:01.326783 systemd[1]: Started systemd-journald.service.
Dec 13 02:21:01.326850 kernel: audit: type=1130 audit(1734056461.301:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.326880 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 02:21:01.326928 kernel: audit: type=1130 audit(1734056461.309:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.326949 kernel: device-mapper: uevent: version 1.0.3
Dec 13 02:21:01.326966 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 02:21:01.327006 kernel: audit: type=1130 audit(1734056461.316:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.162513 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 02:21:01.162591 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 02:21:01.169624 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 02:21:01.175271 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 02:21:01.346031 kernel: audit: type=1130 audit(1734056461.325:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.234269 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 02:21:01.310989 systemd[1]: Started systemd-resolved.service.
Dec 13 02:21:01.318226 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 02:21:01.327204 systemd[1]: Reached target nss-lookup.target.
Dec 13 02:21:01.330918 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 02:21:01.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.345835 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 02:21:01.360258 kernel: audit: type=1130 audit(1734056461.346:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.346957 systemd[1]: Finished systemd-modules-load.service.
Dec 13 02:21:01.351920 systemd[1]: Starting systemd-sysctl.service...
Dec 13 02:21:01.373001 systemd[1]: Finished systemd-sysctl.service.
Dec 13 02:21:01.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.377641 kernel: audit: type=1130 audit(1734056461.371:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.384398 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 02:21:01.400884 kernel: audit: type=1130 audit(1734056461.384:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.393289 systemd[1]: Starting dracut-cmdline.service...
Dec 13 02:21:01.449381 dracut-cmdline[208]: dracut-dracut-053
Dec 13 02:21:01.460143 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:21:01.685574 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 02:21:01.729635 kernel: iscsi: registered transport (tcp)
Dec 13 02:21:01.791084 kernel: iscsi: registered transport (qla4xxx)
Dec 13 02:21:01.791166 kernel: QLogic iSCSI HBA Driver
Dec 13 02:21:01.902652 systemd[1]: Finished dracut-cmdline.service.
Dec 13 02:21:01.913574 kernel: audit: type=1130 audit(1734056461.902:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:01.906019 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 02:21:02.017608 kernel: raid6: avx512x4 gen() 10094 MB/s
Dec 13 02:21:02.035646 kernel: raid6: avx512x4 xor() 4237 MB/s
Dec 13 02:21:02.089595 kernel: raid6: avx512x2 gen() 8926 MB/s
Dec 13 02:21:02.110599 kernel: raid6: avx512x2 xor() 6248 MB/s
Dec 13 02:21:02.132611 kernel: raid6: avx512x1 gen() 3886 MB/s
Dec 13 02:21:02.153794 kernel: raid6: avx512x1 xor() 2174 MB/s
Dec 13 02:21:02.172815 kernel: raid6: avx2x4 gen() 5132 MB/s
Dec 13 02:21:02.197593 kernel: raid6: avx2x4 xor() 2590 MB/s
Dec 13 02:21:02.218236 kernel: raid6: avx2x2 gen() 2247 MB/s
Dec 13 02:21:02.244595 kernel: raid6: avx2x2 xor() 4639 MB/s
Dec 13 02:21:02.266888 kernel: raid6: avx2x1 gen() 5745 MB/s
Dec 13 02:21:02.284193 kernel: raid6: avx2x1 xor() 2053 MB/s
Dec 13 02:21:02.304589 kernel: raid6: sse2x4 gen() 1608 MB/s
Dec 13 02:21:02.322605 kernel: raid6: sse2x4 xor() 1979 MB/s
Dec 13 02:21:02.342089 kernel: raid6: sse2x2 gen() 6168 MB/s
Dec 13 02:21:02.359942 kernel: raid6: sse2x2 xor() 1935 MB/s
Dec 13 02:21:02.376724 kernel: raid6: sse2x1 gen() 1510 MB/s
Dec 13 02:21:02.395861 kernel: raid6: sse2x1 xor() 1656 MB/s
Dec 13 02:21:02.395939 kernel: raid6: using algorithm avx512x4 gen() 10094 MB/s
Dec 13 02:21:02.395958 kernel: raid6: .... xor() 4237 MB/s, rmw enabled
Dec 13 02:21:02.398087 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 02:21:02.514650 kernel: xor: automatically using best checksumming function avx
Dec 13 02:21:02.837656 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 02:21:02.867859 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 02:21:02.900624 kernel: audit: type=1130 audit(1734056462.868:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:02.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:02.899000 audit: BPF prog-id=7 op=LOAD
Dec 13 02:21:02.899000 audit: BPF prog-id=8 op=LOAD
Dec 13 02:21:02.904140 systemd[1]: Starting systemd-udevd.service...
Dec 13 02:21:02.992110 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Dec 13 02:21:03.019910 systemd[1]: Started systemd-udevd.service.
Dec 13 02:21:03.020000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:03.025008 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 02:21:03.118256 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation
Dec 13 02:21:03.199419 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 02:21:03.201586 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 02:21:03.199000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:03.302965 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 02:21:03.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:03.373641 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 02:21:03.401953 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 02:21:03.402111 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 02:21:03.402242 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 02:21:03.402259 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:9b:de:a3:7a:69
Dec 13 02:21:03.402378 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 02:21:03.406561 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 02:21:03.412797 (udev-worker)[441]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:21:03.419899 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 02:21:03.431912 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 02:21:03.431997 kernel: GPT:9289727 != 16777215
Dec 13 02:21:03.432016 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 02:21:03.432035 kernel: GPT:9289727 != 16777215
Dec 13 02:21:03.432051 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 02:21:03.432076 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:03.432092 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 02:21:03.434348 kernel: AES CTR mode by8 optimization enabled
Dec 13 02:21:03.529576 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (441)
Dec 13 02:21:03.661242 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 02:21:03.676295 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 02:21:03.676427 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 02:21:03.699093 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 02:21:03.714950 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 02:21:03.736116 systemd[1]: Starting disk-uuid.service...
Dec 13 02:21:03.756487 disk-uuid[595]: Primary Header is updated.
Dec 13 02:21:03.756487 disk-uuid[595]: Secondary Entries is updated.
Dec 13 02:21:03.756487 disk-uuid[595]: Secondary Header is updated.
Dec 13 02:21:03.781566 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:03.797821 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:04.809379 disk-uuid[596]: The operation has completed successfully.
Dec 13 02:21:04.817835 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 02:21:05.046766 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 02:21:05.046885 systemd[1]: Finished disk-uuid.service.
Dec 13 02:21:05.072737 kernel: kauditd_printk_skb: 5 callbacks suppressed
Dec 13 02:21:05.073428 kernel: audit: type=1130 audit(1734056465.047:16): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.073903 kernel: audit: type=1131 audit(1734056465.047:17): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.082448 systemd[1]: Starting verity-setup.service...
Dec 13 02:21:05.110627 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 02:21:05.322484 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 02:21:05.329722 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 02:21:05.343751 systemd[1]: Finished verity-setup.service.
Dec 13 02:21:05.360907 kernel: audit: type=1130 audit(1734056465.347:18): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.489579 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 02:21:05.490040 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 02:21:05.491654 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 02:21:05.492919 systemd[1]: Starting ignition-setup.service...
Dec 13 02:21:05.526700 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 02:21:05.564133 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 02:21:05.564202 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 02:21:05.564220 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 02:21:05.574568 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 02:21:05.597862 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 02:21:05.627367 systemd[1]: Finished ignition-setup.service.
Dec 13 02:21:05.668283 kernel: audit: type=1130 audit(1734056465.635:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.645797 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 02:21:05.766503 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 02:21:05.772814 kernel: audit: type=1130 audit(1734056465.765:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.772000 audit: BPF prog-id=9 op=LOAD
Dec 13 02:21:05.774401 systemd[1]: Starting systemd-networkd.service...
Dec 13 02:21:05.777605 kernel: audit: type=1334 audit(1734056465.772:21): prog-id=9 op=LOAD
Dec 13 02:21:05.829983 systemd-networkd[1036]: lo: Link UP
Dec 13 02:21:05.829996 systemd-networkd[1036]: lo: Gained carrier
Dec 13 02:21:05.887552 kernel: audit: type=1130 audit(1734056465.832:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.887592 kernel: audit: type=1130 audit(1734056465.877:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.832367 systemd-networkd[1036]: Enumeration completed
Dec 13 02:21:05.832740 systemd[1]: Started systemd-networkd.service.
Dec 13 02:21:05.833888 systemd[1]: Reached target network.target.
Dec 13 02:21:05.843907 systemd-networkd[1036]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 02:21:05.844125 systemd[1]: Starting iscsiuio.service...
Dec 13 02:21:05.879001 systemd[1]: Started iscsiuio.service.
Dec 13 02:21:05.906930 iscsid[1041]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:21:05.906930 iscsid[1041]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 02:21:05.906930 iscsid[1041]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 02:21:05.906930 iscsid[1041]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 02:21:05.906930 iscsid[1041]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 02:21:05.906930 iscsid[1041]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 02:21:05.937056 kernel: audit: type=1130 audit(1734056465.928:24): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.881915 systemd[1]: Starting iscsid.service...
Dec 13 02:21:05.897600 systemd-networkd[1036]: eth0: Link UP
Dec 13 02:21:05.897607 systemd-networkd[1036]: eth0: Gained carrier
Dec 13 02:21:05.925131 systemd[1]: Started iscsid.service.
Dec 13 02:21:05.931245 systemd[1]: Starting dracut-initqueue.service...
Dec 13 02:21:05.937972 systemd-networkd[1036]: eth0: DHCPv4 address 172.31.30.169/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 02:21:05.949666 systemd[1]: Finished dracut-initqueue.service.
Dec 13 02:21:05.954578 kernel: audit: type=1130 audit(1734056465.949:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:05.953810 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 02:21:05.956434 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 02:21:05.957429 systemd[1]: Reached target remote-fs.target.
Dec 13 02:21:05.962262 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 02:21:05.973241 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 02:21:05.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:06.288034 ignition[982]: Ignition 2.14.0
Dec 13 02:21:06.288052 ignition[982]: Stage: fetch-offline
Dec 13 02:21:06.288198 ignition[982]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:06.288247 ignition[982]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:06.318337 ignition[982]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:06.320927 ignition[982]: Ignition finished successfully
Dec 13 02:21:06.323763 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 02:21:06.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:06.327614 systemd[1]: Starting ignition-fetch.service...
Dec 13 02:21:06.342145 ignition[1060]: Ignition 2.14.0
Dec 13 02:21:06.342157 ignition[1060]: Stage: fetch
Dec 13 02:21:06.342309 ignition[1060]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:06.342329 ignition[1060]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:06.359324 ignition[1060]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:06.363286 ignition[1060]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:06.387509 ignition[1060]: INFO : PUT result: OK
Dec 13 02:21:06.390454 ignition[1060]: DEBUG : parsed url from cmdline: ""
Dec 13 02:21:06.390454 ignition[1060]: INFO : no config URL provided
Dec 13 02:21:06.390454 ignition[1060]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 02:21:06.395430 ignition[1060]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 02:21:06.395430 ignition[1060]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:06.395430 ignition[1060]: INFO : PUT result: OK
Dec 13 02:21:06.395430 ignition[1060]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 02:21:06.400899 ignition[1060]: INFO : GET result: OK
Dec 13 02:21:06.401839 ignition[1060]: DEBUG : parsing config with SHA512: 4ac0be39609951fb8b7301c17943e5c1cb3a3997646d9b3f5e45d1fcdb28c5760d8bbdb4ae1e1379ce8cea8cec24739e948cf82b2459919ad4a35fce1c54c842
Dec 13 02:21:06.412041 unknown[1060]: fetched base config from "system"
Dec 13 02:21:06.412059 unknown[1060]: fetched base config from "system"
Dec 13 02:21:06.412069 unknown[1060]: fetched user config from "aws"
Dec 13 02:21:06.415947 ignition[1060]: fetch: fetch complete
Dec 13 02:21:06.415960 ignition[1060]: fetch: fetch passed
Dec 13 02:21:06.416028 ignition[1060]: Ignition finished successfully
Dec 13 02:21:06.420269 systemd[1]: Finished ignition-fetch.service.
Dec 13 02:21:06.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:06.422671 systemd[1]: Starting ignition-kargs.service...
Dec 13 02:21:06.452553 ignition[1066]: Ignition 2.14.0
Dec 13 02:21:06.452571 ignition[1066]: Stage: kargs
Dec 13 02:21:06.452779 ignition[1066]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 02:21:06.452936 ignition[1066]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 02:21:06.463080 ignition[1066]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 02:21:06.464505 ignition[1066]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 02:21:06.466083 ignition[1066]: INFO : PUT result: OK
Dec 13 02:21:06.471215 ignition[1066]: kargs: kargs passed
Dec 13 02:21:06.471286 ignition[1066]: Ignition finished successfully
Dec 13 02:21:06.473280 systemd[1]: Finished ignition-kargs.service.
Dec 13 02:21:06.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 02:21:06.474725 systemd[1]: Starting ignition-disks.service...
Dec 13 02:21:06.485535 ignition[1072]: Ignition 2.14.0 Dec 13 02:21:06.485560 ignition[1072]: Stage: disks Dec 13 02:21:06.485769 ignition[1072]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:21:06.485801 ignition[1072]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:21:06.521884 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:21:06.527020 ignition[1072]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:21:06.531496 ignition[1072]: INFO : PUT result: OK Dec 13 02:21:06.543743 ignition[1072]: disks: disks passed Dec 13 02:21:06.543866 ignition[1072]: Ignition finished successfully Dec 13 02:21:06.547594 systemd[1]: Finished ignition-disks.service. Dec 13 02:21:06.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:06.547891 systemd[1]: Reached target initrd-root-device.target. Dec 13 02:21:06.553755 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:21:06.554834 systemd[1]: Reached target local-fs.target. Dec 13 02:21:06.557482 systemd[1]: Reached target sysinit.target. Dec 13 02:21:06.558378 systemd[1]: Reached target basic.target. Dec 13 02:21:06.563456 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:21:06.619822 systemd-fsck[1080]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:21:06.623619 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:21:06.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:06.626760 systemd[1]: Mounting sysroot.mount... Dec 13 02:21:06.651568 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:21:06.652608 systemd[1]: Mounted sysroot.mount. Dec 13 02:21:06.653649 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:21:06.659187 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:21:06.669838 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 02:21:06.669914 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:21:06.669952 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:21:06.680000 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:21:06.698589 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:21:06.704317 systemd[1]: Starting initrd-setup-root.service... 
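[Annotation] The systemd-fsck summary "clean, 621/553520 files, 56021/553472 blocks" reports used/total inodes ("files") and used/total blocks for the ROOT filesystem. Worked out:

    # fsck summary fields: used/total inodes and used/total blocks
    inodes_used, inodes_total = 621, 553520
    blocks_used, blocks_total = 56021, 553472
    print(f"inodes {100 * inodes_used / inodes_total:.2f}% used")  # ~0.11%
    print(f"blocks {100 * blocks_used / blocks_total:.2f}% used")  # ~10.12%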
Dec 13 02:21:06.718991 initrd-setup-root[1102]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:21:06.725819 initrd-setup-root[1110]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:21:06.732908 initrd-setup-root[1118]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:21:06.739570 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1097) Dec 13 02:21:06.745006 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:21:06.745304 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:21:06.745334 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:21:06.745373 initrd-setup-root[1126]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:21:06.788571 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:21:06.799295 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:21:06.910588 systemd[1]: Finished initrd-setup-root.service. Dec 13 02:21:06.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:06.911789 systemd[1]: Starting ignition-mount.service... Dec 13 02:21:06.918231 systemd[1]: Starting sysroot-boot.service... Dec 13 02:21:06.933697 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:21:06.933887 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:21:06.965480 ignition[1163]: INFO : Ignition 2.14.0 Dec 13 02:21:06.967071 ignition[1163]: INFO : Stage: mount Dec 13 02:21:06.968675 ignition[1163]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:21:06.970230 ignition[1163]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:21:06.981381 systemd[1]: Finished sysroot-boot.service. Dec 13 02:21:06.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:06.985907 ignition[1163]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:21:06.987300 ignition[1163]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:21:06.990615 ignition[1163]: INFO : PUT result: OK Dec 13 02:21:06.994780 ignition[1163]: INFO : mount: mount passed Dec 13 02:21:06.995735 ignition[1163]: INFO : Ignition finished successfully Dec 13 02:21:06.998447 systemd[1]: Finished ignition-mount.service. Dec 13 02:21:06.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:07.000413 systemd[1]: Starting ignition-files.service... Dec 13 02:21:07.023375 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
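[Annotation] The "cut: /sysroot/etc/passwd: No such file or directory" lines come from initrd-setup-root probing account databases that simply do not exist yet at this point in the initrd, so they are expected here rather than fatal. The equivalent of such a cut-on-colon probe, tolerant of the missing file (a sketch; which fields the script actually extracts is an assumption):

    import os

    def first_fields(path: str) -> list[str]:
        # Roughly `cut -d: -f1 <path>`: return the first colon-separated
        # field of each line, or [] when the file is absent, mirroring the
        # harmless "No such file or directory" messages above.
        if not os.path.exists(path):
            return []
        with open(path) as f:
            return [line.split(":", 1)[0] for line in f if line.strip()]

    print(first_fields("/sysroot/etc/passwd"))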
Dec 13 02:21:07.050656 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1172) Dec 13 02:21:07.063368 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:21:07.063447 kernel: BTRFS info (device nvme0n1p6): using free space tree Dec 13 02:21:07.063465 kernel: BTRFS info (device nvme0n1p6): has skinny extents Dec 13 02:21:07.072574 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Dec 13 02:21:07.076401 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:21:07.090781 ignition[1191]: INFO : Ignition 2.14.0 Dec 13 02:21:07.090781 ignition[1191]: INFO : Stage: files Dec 13 02:21:07.093586 ignition[1191]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:21:07.093586 ignition[1191]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:21:07.105876 ignition[1191]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:21:07.108193 ignition[1191]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:21:07.110395 ignition[1191]: INFO : PUT result: OK Dec 13 02:21:07.115786 ignition[1191]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:21:07.124270 ignition[1191]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:21:07.126608 ignition[1191]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:21:07.139183 ignition[1191]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:21:07.140975 ignition[1191]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:21:07.144848 unknown[1191]: wrote ssh authorized keys file for user: core Dec 13 02:21:07.146368 ignition[1191]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:21:07.149402 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:21:07.151427 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:21:07.151427 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:21:07.151427 ignition[1191]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:21:07.211690 ignition[1191]: INFO : GET result: OK Dec 13 02:21:07.491188 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:21:07.491188 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:21:07.507193 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:21:07.507193 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:21:07.512621 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:21:07.512621 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:21:07.512621 ignition[1191]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:21:07.523897 ignition[1191]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3265521153" Dec 13 02:21:07.527169 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1194) Dec 13 02:21:07.527197 ignition[1191]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3265521153": device or resource busy Dec 13 02:21:07.527197 ignition[1191]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3265521153", trying btrfs: device or resource busy Dec 13 02:21:07.527197 ignition[1191]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3265521153" Dec 13 02:21:07.527197 ignition[1191]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3265521153" Dec 13 02:21:07.536280 ignition[1191]: INFO : op(3): [started] unmounting "/mnt/oem3265521153" Dec 13 02:21:07.536280 ignition[1191]: INFO : op(3): [finished] unmounting "/mnt/oem3265521153" Dec 13 02:21:07.538619 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/eks/bootstrap.sh" Dec 13 02:21:07.538619 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:21:07.543429 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:21:07.543429 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:21:07.543429 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:21:07.543429 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:21:07.543429 ignition[1191]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:21:07.744729 systemd-networkd[1036]: eth0: Gained IPv6LL Dec 13 02:21:08.090363 ignition[1191]: INFO : GET result: OK Dec 13 02:21:08.283086 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:21:08.285692 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:21:08.285692 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:21:08.285692 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:21:08.285692 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:21:08.299224 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:21:08.299224 ignition[1191]: INFO : oem config not found in "/usr/share/oem", 
looking on oem partition Dec 13 02:21:08.310829 ignition[1191]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2225146014" Dec 13 02:21:08.313392 ignition[1191]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2225146014": device or resource busy Dec 13 02:21:08.313392 ignition[1191]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2225146014", trying btrfs: device or resource busy Dec 13 02:21:08.313392 ignition[1191]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2225146014" Dec 13 02:21:08.313392 ignition[1191]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2225146014" Dec 13 02:21:08.313392 ignition[1191]: INFO : op(6): [started] unmounting "/mnt/oem2225146014" Dec 13 02:21:08.313392 ignition[1191]: INFO : op(6): [finished] unmounting "/mnt/oem2225146014" Dec 13 02:21:08.313392 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Dec 13 02:21:08.313392 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:21:08.331139 ignition[1191]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 02:21:08.326478 systemd[1]: mnt-oem2225146014.mount: Deactivated successfully. Dec 13 02:21:08.832394 ignition[1191]: INFO : GET result: OK Dec 13 02:21:09.362631 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:21:09.362631 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:21:09.368310 ignition[1191]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:21:09.372804 ignition[1191]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1703050214" Dec 13 02:21:09.375702 ignition[1191]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1703050214": device or resource busy Dec 13 02:21:09.375702 ignition[1191]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1703050214", trying btrfs: device or resource busy Dec 13 02:21:09.375702 ignition[1191]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1703050214" Dec 13 02:21:09.375702 ignition[1191]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1703050214" Dec 13 02:21:09.384486 ignition[1191]: INFO : op(9): [started] unmounting "/mnt/oem1703050214" Dec 13 02:21:09.384486 ignition[1191]: INFO : op(9): [finished] unmounting "/mnt/oem1703050214" Dec 13 02:21:09.384486 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Dec 13 02:21:09.384486 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:21:09.384486 ignition[1191]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:21:09.394500 systemd[1]: mnt-oem1703050214.mount: Deactivated successfully. 
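[Annotation] Each OEM write above follows the same pattern: Ignition first tries to mount /dev/disk/by-label/OEM as ext4, logs the failure, then retries as btrfs and succeeds (the partition is btrfs, per the kernel lines), so the CRITICAL/ERROR pair is noisy but harmless. A sketch of that try-in-order mount fallback, assuming the system mount(8) binary and a throwaway /mnt/oemXXXX-style mountpoint:

    import subprocess, tempfile

    def mount_first(device: str, fstypes=("ext4", "btrfs")) -> str:
        # Mirror the fallback seen above: try each filesystem type in order,
        # mounting at a temporary directory like /mnt/oem3265521153.
        mountpoint = tempfile.mkdtemp(prefix="oem", dir="/mnt")
        for fstype in fstypes:
            rc = subprocess.run(
                ["mount", "-t", fstype, device, mountpoint],
                capture_output=True,
            ).returncode
            if rc == 0:
                return mountpoint
        raise RuntimeError(f"could not mount {device} as any of {fstypes}")

    # mp = mount_first("/dev/disk/by-label/OEM")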
Dec 13 02:21:09.409520 ignition[1191]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2750336652" Dec 13 02:21:09.411392 ignition[1191]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2750336652": device or resource busy Dec 13 02:21:09.411392 ignition[1191]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2750336652", trying btrfs: device or resource busy Dec 13 02:21:09.411392 ignition[1191]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2750336652" Dec 13 02:21:09.411392 ignition[1191]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2750336652" Dec 13 02:21:09.418727 ignition[1191]: INFO : op(c): [started] unmounting "/mnt/oem2750336652" Dec 13 02:21:09.418727 ignition[1191]: INFO : op(c): [finished] unmounting "/mnt/oem2750336652" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(12): [started] processing unit "amazon-ssm-agent.service" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(12): op(13): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(12): op(13): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Dec 13 02:21:09.418727 ignition[1191]: INFO : files: op(12): [finished] processing unit "amazon-ssm-agent.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(14): [started] processing unit "nvidia.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(14): [finished] processing unit "nvidia.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(15): [started] processing unit "containerd.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(15): [finished] processing unit "containerd.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(17): [started] processing unit "prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(17): op(18): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(17): op(18): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(17): [finished] processing unit "prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(19): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(19): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:21:09.439032 
ignition[1191]: INFO : files: op(1a): [started] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(1a): [finished] setting preset to enabled for "amazon-ssm-agent.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(1c): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: op(1c): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: createResultFile: createFiles: op(1d): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: createResultFile: createFiles: op(1d): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:21:09.439032 ignition[1191]: INFO : files: files passed Dec 13 02:21:09.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.489415 ignition[1191]: INFO : Ignition finished successfully Dec 13 02:21:09.453827 systemd[1]: Finished ignition-files.service. Dec 13 02:21:09.498133 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:21:09.499497 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:21:09.500326 systemd[1]: Starting ignition-quench.service... Dec 13 02:21:09.511126 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:21:09.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.511236 systemd[1]: Finished ignition-quench.service. Dec 13 02:21:09.522060 initrd-setup-root-after-ignition[1216]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:21:09.525477 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:21:09.526000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.528797 systemd[1]: Reached target ignition-complete.target. Dec 13 02:21:09.532887 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:21:09.560522 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:21:09.560665 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:21:09.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:09.562906 systemd[1]: Reached target initrd-fs.target. Dec 13 02:21:09.565952 systemd[1]: Reached target initrd.target. Dec 13 02:21:09.566091 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:21:09.567137 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:21:09.609319 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:21:09.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.621701 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:21:09.649126 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:21:09.651286 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:21:09.655482 systemd[1]: Stopped target timers.target. Dec 13 02:21:09.659686 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:21:09.661055 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:21:09.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.663330 systemd[1]: Stopped target initrd.target. Dec 13 02:21:09.665468 systemd[1]: Stopped target basic.target. Dec 13 02:21:09.667307 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:21:09.669518 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:21:09.671466 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:21:09.674127 systemd[1]: Stopped target remote-fs.target. Dec 13 02:21:09.676577 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:21:09.678535 systemd[1]: Stopped target sysinit.target. Dec 13 02:21:09.680250 systemd[1]: Stopped target local-fs.target. Dec 13 02:21:09.689210 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:21:09.696804 systemd[1]: Stopped target swap.target. Dec 13 02:21:09.699469 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:21:09.702579 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:21:09.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.702965 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:21:09.712149 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:21:09.713822 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:21:09.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.724163 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:21:09.724322 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:21:09.725954 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:21:09.728000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:09.726060 systemd[1]: Stopped ignition-files.service. Dec 13 02:21:09.733489 systemd[1]: Stopping ignition-mount.service... Dec 13 02:21:09.757723 iscsid[1041]: iscsid shutting down. Dec 13 02:21:09.753805 systemd[1]: Stopping iscsid.service... Dec 13 02:21:09.762680 ignition[1229]: INFO : Ignition 2.14.0 Dec 13 02:21:09.762680 ignition[1229]: INFO : Stage: umount Dec 13 02:21:09.762680 ignition[1229]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:21:09.762680 ignition[1229]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Dec 13 02:21:09.762680 ignition[1229]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Dec 13 02:21:09.762680 ignition[1229]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Dec 13 02:21:09.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.760675 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:21:09.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.791161 ignition[1229]: INFO : PUT result: OK Dec 13 02:21:09.761012 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:21:09.764119 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:21:09.782227 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:21:09.782471 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:21:09.785152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:21:09.785327 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:21:09.788793 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 02:21:09.789260 systemd[1]: Stopped iscsid.service. Dec 13 02:21:09.807713 ignition[1229]: INFO : umount: umount passed Dec 13 02:21:09.807713 ignition[1229]: INFO : Ignition finished successfully Dec 13 02:21:09.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.812376 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:21:09.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.812574 systemd[1]: Stopped ignition-mount.service. Dec 13 02:21:09.814180 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:21:09.814323 systemd[1]: Stopped ignition-disks.service. Dec 13 02:21:09.815417 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:21:09.815476 systemd[1]: Stopped ignition-kargs.service. 
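[Annotation] During the files stage earlier, Ignition wrote a containerd drop-in at /sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf; systemd merges any <unit>.d/*.conf drop-in into the unit it accompanies. A sketch of writing a file at that drop-in convention (the drop-in's contents are not shown in the log, so any body passed here is hypothetical):

    from pathlib import Path

    def write_dropin(unit: str, name: str, body: str, root: str = "/sysroot") -> Path:
        # systemd drop-ins live at etc/systemd/system/<unit>.d/<name>.conf
        # and are merged into the unit's configuration at load time.
        d = Path(root, "etc/systemd/system", unit + ".d")
        d.mkdir(parents=True, exist_ok=True)
        p = d / name
        p.write_text(body)
        return p

    # write_dropin("containerd.service", "10-use-cgroupfs.conf", "[Service]\n...")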
Dec 13 02:21:09.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.823581 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 02:21:09.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.823703 systemd[1]: Stopped ignition-fetch.service. Dec 13 02:21:09.827070 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:21:09.827129 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:21:09.838000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.839927 systemd[1]: Stopped target paths.target. Dec 13 02:21:09.840791 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:21:09.845633 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:21:09.849306 systemd[1]: Stopped target slices.target. Dec 13 02:21:09.853999 systemd[1]: Stopped target sockets.target. Dec 13 02:21:09.858781 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:21:09.858833 systemd[1]: Closed iscsid.socket. Dec 13 02:21:09.861249 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:21:09.861309 systemd[1]: Stopped ignition-setup.service. Dec 13 02:21:09.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.864460 systemd[1]: Stopping iscsiuio.service... Dec 13 02:21:09.870135 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:21:09.870740 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:21:09.871225 systemd[1]: Stopped iscsiuio.service. Dec 13 02:21:09.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.872644 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:21:09.872730 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:21:09.875219 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:21:09.875331 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:21:09.878754 systemd[1]: Stopped target network.target. Dec 13 02:21:09.880931 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:21:09.880981 systemd[1]: Closed iscsiuio.socket. 
Dec 13 02:21:09.884118 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:21:09.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.884179 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:21:09.889717 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:21:09.891966 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:21:09.894614 systemd-networkd[1036]: eth0: DHCPv6 lease lost Dec 13 02:21:09.896000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.896749 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:21:09.896930 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:21:09.901253 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:21:09.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.901378 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:21:09.906000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:21:09.908302 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:21:09.908357 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:21:09.906000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:21:09.911237 systemd[1]: Stopping network-cleanup.service... Dec 13 02:21:09.913718 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:21:09.913790 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:21:09.917000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.918402 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:21:09.918663 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:21:09.925074 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:21:09.927000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.925134 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:21:09.929333 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:21:09.939517 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:21:09.944939 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:21:09.945244 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:21:09.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.948317 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:21:09.948380 systemd[1]: Closed systemd-udevd-control.socket. 
Dec 13 02:21:09.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.950511 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:21:09.950687 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:21:09.950800 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:21:09.950847 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:21:09.951007 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:21:09.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:09.951044 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:21:09.951140 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:21:09.951174 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 02:21:09.952518 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:21:09.952960 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:21:09.953021 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:21:09.964501 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:21:09.964621 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:21:09.966933 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:21:09.967412 systemd[1]: Stopped network-cleanup.service. Dec 13 02:21:09.970012 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:21:09.973092 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:21:10.002733 systemd[1]: Switching root. Dec 13 02:21:10.003000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:21:10.003000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:21:10.006000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:21:10.006000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:21:10.006000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:21:10.025821 systemd-journald[185]: Journal stopped Dec 13 02:21:16.046836 systemd-journald[185]: Received SIGTERM from PID 1 (systemd). Dec 13 02:21:16.052105 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:21:16.052156 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 02:21:16.052175 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:21:16.052196 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:21:16.052215 kernel: SELinux: policy capability open_perms=1 Dec 13 02:21:16.052235 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:21:16.052265 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:21:16.052285 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:21:16.052307 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:21:16.052324 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:21:16.052340 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:21:16.052357 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 02:21:16.052380 kernel: audit: type=1403 audit(1734056470.764:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Dec 13 02:21:16.052402 systemd[1]: Successfully loaded SELinux policy in 105.105ms. Dec 13 02:21:16.052433 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.149ms. Dec 13 02:21:16.052456 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:21:16.052475 systemd[1]: Detected virtualization amazon. Dec 13 02:21:16.052494 systemd[1]: Detected architecture x86-64. Dec 13 02:21:16.052513 systemd[1]: Detected first boot. Dec 13 02:21:16.052533 systemd[1]: Initializing machine ID from VM UUID. Dec 13 02:21:16.052574 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 02:21:16.052594 kernel: audit: type=1400 audit(1734056471.189:84): avc: denied { associate } for pid=1279 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 02:21:16.052618 kernel: audit: type=1300 audit(1734056471.189:84): arch=c000003e syscall=188 success=yes exit=0 a0=c0001196c2 a1=c00002cb40 a2=c00002aa40 a3=32 items=0 ppid=1262 pid=1279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:16.052636 kernel: audit: type=1327 audit(1734056471.189:84): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:21:16.052655 kernel: audit: type=1400 audit(1734056471.194:85): avc: denied { associate } for pid=1279 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 02:21:16.052673 kernel: audit: type=1300 audit(1734056471.194:85): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000119799 a2=1ed a3=0 items=2 ppid=1262 pid=1279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:16.052692 kernel: audit: type=1307 audit(1734056471.194:85): cwd="/" Dec 13 02:21:16.052713 kernel: audit: type=1302 audit(1734056471.194:85): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:16.052731 kernel: audit: type=1302 audit(1734056471.194:85): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:16.052748 kernel: audit: type=1327 audit(1734056471.194:85): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 02:21:16.052768 systemd[1]: Populated /etc with preset unit settings. Dec 13 02:21:16.052791 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:21:16.052811 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:21:16.052833 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:21:16.052854 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:21:16.052873 systemd[1]: Created slice system-addon\x2dconfig.slice. 
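[Annotation] The audit PROCTITLE records above hex-encode the process command line, with NUL bytes separating arguments (the log's hex is truncated mid-argument). Decoding the first stretch recovers the torcx-generator invocation:

    def decode_proctitle(hexstr: str) -> str:
        # PROCTITLE is the raw /proc/<pid>/cmdline, hex-encoded; arguments
        # are NUL-separated, so swap NULs for spaces after decoding.
        return bytes.fromhex(hexstr).replace(b"\x00", b" ").decode()

    print(decode_proctitle(
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E65"
        "7261746F72732F746F7263782D67656E657261746F72"
    ))  # -> /usr/lib/systemd/system-generators/torcx-generator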
Dec 13 02:21:16.052891 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:21:16.052909 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:21:16.052928 systemd[1]: Created slice system-getty.slice. Dec 13 02:21:16.052947 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:21:16.052968 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:21:16.052990 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:21:16.053009 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:21:16.053029 systemd[1]: Created slice user.slice. Dec 13 02:21:16.053047 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:21:16.053066 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:21:16.053083 systemd[1]: Set up automount boot.automount. Dec 13 02:21:16.053101 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:21:16.053120 systemd[1]: Reached target integritysetup.target. Dec 13 02:21:16.053144 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:21:16.053166 systemd[1]: Reached target remote-fs.target. Dec 13 02:21:16.053183 systemd[1]: Reached target slices.target. Dec 13 02:21:16.053202 systemd[1]: Reached target swap.target. Dec 13 02:21:16.053220 systemd[1]: Reached target torcx.target. Dec 13 02:21:16.053237 systemd[1]: Reached target veritysetup.target. Dec 13 02:21:16.053255 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:21:16.053274 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:21:16.053291 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:21:16.053309 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:21:16.053328 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:21:16.059630 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:21:16.059688 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:21:16.059720 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:21:16.059820 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:21:16.059843 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:21:16.059864 systemd[1]: Mounting dev-mqueue.mount... Dec 13 02:21:16.059883 systemd[1]: Mounting media.mount... Dec 13 02:21:16.059904 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:16.059923 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:21:16.059950 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:21:16.059971 systemd[1]: Mounting tmp.mount... Dec 13 02:21:16.059988 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:21:16.060014 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:21:16.060046 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:21:16.060073 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:21:16.060091 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:21:16.060109 systemd[1]: Starting modprobe@drm.service... Dec 13 02:21:16.060128 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:21:16.060148 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:21:16.060169 systemd[1]: Starting modprobe@loop.service... Dec 13 02:21:16.060190 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Dec 13 02:21:16.060211 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:21:16.060233 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:21:16.068020 systemd[1]: Starting systemd-journald.service... Dec 13 02:21:16.068068 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:21:16.068089 systemd[1]: Starting systemd-network-generator.service... Dec 13 02:21:16.068109 kernel: loop: module loaded Dec 13 02:21:16.068128 kernel: fuse: init (API version 7.34) Dec 13 02:21:16.068147 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:21:16.068166 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:21:16.068186 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:16.068204 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:21:16.068229 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:21:16.068248 systemd[1]: Mounted media.mount. Dec 13 02:21:16.068265 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:21:16.068284 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:21:16.068301 systemd[1]: Mounted tmp.mount. Dec 13 02:21:16.068320 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:21:16.068338 kernel: kauditd_printk_skb: 2 callbacks suppressed Dec 13 02:21:16.068357 kernel: audit: type=1130 audit(1734056476.015:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.068376 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:21:16.068397 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:21:16.068416 kernel: audit: type=1130 audit(1734056476.028:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.068434 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:21:16.068453 kernel: audit: type=1131 audit(1734056476.028:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.068470 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:21:16.068489 kernel: audit: type=1305 audit(1734056476.037:91): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:21:16.068507 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:21:16.068526 systemd[1]: Finished modprobe@drm.service. 
Dec 13 02:21:16.068563 kernel: audit: type=1300 audit(1734056476.037:91): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffebd320960 a2=4000 a3=7ffebd3209fc items=0 ppid=1 pid=1374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:16.068582 kernel: audit: type=1327 audit(1734056476.037:91): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:21:16.068600 kernel: audit: type=1130 audit(1734056476.042:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.068618 kernel: audit: type=1131 audit(1734056476.042:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.068636 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:21:16.068654 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:21:16.068685 systemd-journald[1374]: Journal started Dec 13 02:21:16.068762 systemd-journald[1374]: Runtime Journal (/run/log/journal/ec214f889ba1a8a8b085b624f2426079) is 4.8M, max 38.7M, 33.9M free. Dec 13 02:21:15.701000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:21:16.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.037000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:21:16.037000 audit[1374]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffebd320960 a2=4000 a3=7ffebd3209fc items=0 ppid=1 pid=1374 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:16.037000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:21:16.074598 kernel: audit: type=1130 audit(1734056476.061:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:16.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.076925 systemd[1]: Started systemd-journald.service. Dec 13 02:21:16.089703 kernel: audit: type=1131 audit(1734056476.061:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.061000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.083000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.078604 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Dec 13 02:21:16.078847 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:21:16.080148 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:21:16.080349 systemd[1]: Finished modprobe@loop.service. Dec 13 02:21:16.085777 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:21:16.087249 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:21:16.088714 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:21:16.090220 systemd[1]: Reached target network-pre.target. Dec 13 02:21:16.092914 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:21:16.098986 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:21:16.099953 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:21:16.105693 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:21:16.110950 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:21:16.112726 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:21:16.114367 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:21:16.117786 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:21:16.120791 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:21:16.130972 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:21:16.133859 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:21:16.140381 systemd-journald[1374]: Time spent on flushing to /var/log/journal/ec214f889ba1a8a8b085b624f2426079 is 77.137ms for 1141 entries. Dec 13 02:21:16.140381 systemd-journald[1374]: System Journal (/var/log/journal/ec214f889ba1a8a8b085b624f2426079) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:21:16.229795 systemd-journald[1374]: Received client request to flush runtime journal. Dec 13 02:21:16.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.152064 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:21:16.158633 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:21:16.211310 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:21:16.231043 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:21:16.256918 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:21:16.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.259682 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:21:16.272323 systemd[1]: Finished flatcar-tmpfiles.service. 
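
journald reports each journal as "is <current>, max <limit>, <free> free"; the flush stat above (77.137ms for 1141 entries) works out to roughly 68 µs per entry. A small parser for those size lines, assuming journald's single-letter suffixes are base-1024 (M = MiB):

    import re

    UNITS = {"K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}  # assumed base-1024
    SIZES = re.compile(r"is ([\d.]+)([KMGT]), max ([\d.]+)([KMGT]), ([\d.]+)([KMGT]) free")

    def parse_journal_sizes(line):
        m = SIZES.search(line)
        if m is None:
            return None
        cur, cap, free = (float(m.group(i)) * UNITS[m.group(i + 1)] for i in (1, 3, 5))
        return {"current_bytes": cur, "max_bytes": cap, "free_bytes": free}

    line = ("Runtime Journal (/run/log/journal/ec214f889ba1a8a8b085b624f2426079) "
            "is 4.8M, max 38.7M, 33.9M free.")
    print(parse_journal_sizes(line))  # ~5.0 MB used of a ~40.6 MB cap
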
Dec 13 02:21:16.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.278441 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:21:16.283681 udevadm[1424]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:21:16.363042 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:21:16.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.366134 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:21:16.470000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.470767 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:21:16.968944 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:21:16.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:16.971927 systemd[1]: Starting systemd-udevd.service... Dec 13 02:21:16.993119 systemd-udevd[1433]: Using default interface naming scheme 'v252'. Dec 13 02:21:17.052635 systemd[1]: Started systemd-udevd.service. Dec 13 02:21:17.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.056019 systemd[1]: Starting systemd-networkd.service... Dec 13 02:21:17.097624 systemd[1]: Starting systemd-userdbd.service... Dec 13 02:21:17.107686 (udev-worker)[1448]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:21:17.110944 systemd[1]: Found device dev-ttyS0.device. Dec 13 02:21:17.186571 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Dec 13 02:21:17.199564 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:21:17.205866 systemd[1]: Started systemd-userdbd.service. Dec 13 02:21:17.207380 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Dec 13 02:21:17.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:17.230637 kernel: ACPI: button: Sleep Button [SLPF] Dec 13 02:21:17.284000 audit[1449]: AVC avc: denied { confidentiality } for pid=1449 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:21:17.284000 audit[1449]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55ff1923d0d0 a1=337fc a2=7f0ea042abc5 a3=5 items=110 ppid=1433 pid=1449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:17.284000 audit: CWD cwd="/" Dec 13 02:21:17.284000 audit: PATH item=0 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=1 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=2 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=3 name=(null) inode=14126 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=4 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=5 name=(null) inode=14127 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=6 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=7 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=8 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=9 name=(null) inode=14129 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=10 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=11 name=(null) inode=14130 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=12 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 
audit: PATH item=13 name=(null) inode=14131 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=14 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=15 name=(null) inode=14132 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=16 name=(null) inode=14128 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=17 name=(null) inode=14133 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=18 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=19 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=20 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=21 name=(null) inode=14135 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=22 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=23 name=(null) inode=14136 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=24 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=25 name=(null) inode=14137 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=26 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=27 name=(null) inode=14138 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=28 name=(null) inode=14134 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=29 name=(null) inode=14139 dev=00:0b mode=0100440 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=30 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=31 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=32 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=33 name=(null) inode=14141 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=34 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=35 name=(null) inode=14142 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=36 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=37 name=(null) inode=14143 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=38 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=39 name=(null) inode=14144 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=40 name=(null) inode=14140 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=41 name=(null) inode=14145 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=42 name=(null) inode=14125 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=43 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=44 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=45 name=(null) inode=14147 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=46 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=47 name=(null) inode=14148 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=48 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=49 name=(null) inode=14149 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=50 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=51 name=(null) inode=14150 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=52 name=(null) inode=14146 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=53 name=(null) inode=14151 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=54 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=55 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=56 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=57 name=(null) inode=14153 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=58 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=59 name=(null) inode=14154 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=60 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=61 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 
audit: PATH item=62 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=63 name=(null) inode=14156 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=64 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=65 name=(null) inode=14157 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=66 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=67 name=(null) inode=14158 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=68 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=69 name=(null) inode=14159 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=70 name=(null) inode=14155 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=71 name=(null) inode=14160 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=72 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=73 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=74 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=75 name=(null) inode=14162 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=76 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=77 name=(null) inode=14163 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=78 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 
ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=79 name=(null) inode=14164 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=80 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=81 name=(null) inode=14165 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=82 name=(null) inode=14161 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=83 name=(null) inode=14166 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=84 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=85 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=86 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=87 name=(null) inode=14168 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=88 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=89 name=(null) inode=14169 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=90 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=91 name=(null) inode=14170 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=92 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=93 name=(null) inode=14171 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=94 name=(null) inode=14167 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=95 name=(null) inode=14172 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=96 name=(null) inode=14152 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=97 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=98 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=99 name=(null) inode=14174 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=100 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=101 name=(null) inode=14175 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=102 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=103 name=(null) inode=14176 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=104 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=105 name=(null) inode=14177 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=106 name=(null) inode=14173 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=107 name=(null) inode=14178 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PATH item=109 name=(null) inode=14179 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:21:17.284000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:21:17.325568 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 02:21:17.326189 kernel: piix4_smbus 0000:00:01.3: 
SMBus Host Controller at 0xb100, revision 255 Dec 13 02:21:17.356560 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:21:17.366779 systemd-networkd[1439]: lo: Link UP Dec 13 02:21:17.366789 systemd-networkd[1439]: lo: Gained carrier Dec 13 02:21:17.367352 systemd-networkd[1439]: Enumeration completed Dec 13 02:21:17.367488 systemd-networkd[1439]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:21:17.367525 systemd[1]: Started systemd-networkd.service. Dec 13 02:21:17.370597 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:21:17.371261 systemd-networkd[1439]: eth0: Link UP Dec 13 02:21:17.371644 systemd-networkd[1439]: eth0: Gained carrier Dec 13 02:21:17.376580 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1434) Dec 13 02:21:17.379722 systemd-networkd[1439]: eth0: DHCPv4 address 172.31.30.169/20, gateway 172.31.16.1 acquired from 172.31.16.1 Dec 13 02:21:17.521000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.535494 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:21:17.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.564041 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:21:17.569217 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:21:17.571634 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:21:17.633039 lvm[1547]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:21:17.662101 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:21:17.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.663392 systemd[1]: Reached target cryptsetup.target. Dec 13 02:21:17.666254 systemd[1]: Starting lvm2-activation.service... Dec 13 02:21:17.673263 lvm[1550]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:21:17.696200 systemd[1]: Finished lvm2-activation.service. Dec 13 02:21:17.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.697471 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:21:17.698454 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:21:17.698488 systemd[1]: Reached target local-fs.target. Dec 13 02:21:17.699568 systemd[1]: Reached target machines.target. Dec 13 02:21:17.702786 systemd[1]: Starting ldconfig.service... Dec 13 02:21:17.705325 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
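
The DHCPv4 lease above (172.31.30.169/20, gateway 172.31.16.1, acquired from 172.31.16.1) can be sanity-checked with the Python standard library: the /20 prefix is exactly what puts the gateway on-link. Values below are taken from the lease line, nothing else assumed:

    import ipaddress

    iface = ipaddress.ip_interface("172.31.30.169/20")
    gateway = ipaddress.ip_address("172.31.16.1")

    print(iface.network)                # 172.31.16.0/20
    print(iface.network.num_addresses)  # 4096 addresses in a /20
    print(gateway in iface.network)     # True: the gateway is reachable on-link
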
Dec 13 02:21:17.705455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:17.707894 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:21:17.712240 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:21:17.715104 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:21:17.717994 systemd[1]: Starting systemd-sysext.service... Dec 13 02:21:17.738832 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1557 (bootctl) Dec 13 02:21:17.741499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:21:17.744837 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:21:17.757577 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:21:17.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.761056 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:21:17.761425 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:21:17.784586 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:21:17.903584 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:21:17.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.913036 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:21:17.914210 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:21:17.929574 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:21:17.944726 systemd-fsck[1569]: fsck.fat 4.2 (2021-01-31) Dec 13 02:21:17.944726 systemd-fsck[1569]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters Dec 13 02:21:17.947456 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:21:17.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:17.950931 systemd[1]: Mounting boot.mount... Dec 13 02:21:17.962580 (sd-sysext)[1573]: Using extensions 'kubernetes'. Dec 13 02:21:17.965238 (sd-sysext)[1573]: Merged extensions into '/usr'. Dec 13 02:21:17.994465 systemd[1]: Mounted boot.mount. Dec 13 02:21:18.004962 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:18.006793 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:21:18.008262 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.011203 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:21:18.013807 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:21:18.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:18.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.020166 systemd[1]: Starting modprobe@loop.service... Dec 13 02:21:18.021240 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.021452 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:18.021699 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:18.023320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:21:18.023614 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:21:18.033709 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:21:18.035770 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:21:18.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.037651 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:21:18.040502 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:21:18.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.040734 systemd[1]: Finished modprobe@loop.service. Dec 13 02:21:18.045652 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:21:18.046986 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.050355 systemd[1]: Finished systemd-sysext.service. Dec 13 02:21:18.053614 systemd[1]: Starting ensure-sysext.service... Dec 13 02:21:18.056576 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:21:18.079186 systemd[1]: Reloading. Dec 13 02:21:18.091251 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:21:18.100084 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:21:18.106098 systemd-tmpfiles[1602]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
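
systemd-tmpfiles logs "Duplicate line for path ..." when two tmpfiles.d entries claim the same path; the path is the second whitespace-separated field of each non-comment line ("type path mode uid gid age argument"). A simplified single-file detector in the same spirit — the real implementation additionally merges files across /usr/lib, /run, and /etc with override priorities:

    def find_duplicate_paths(lines):
        """Yield (lineno, path, first_seen) for repeated tmpfiles.d paths."""
        seen = {}
        for lineno, raw in enumerate(lines, start=1):
            line = raw.strip()
            if not line or line.startswith("#"):
                continue  # skip blanks and comments
            parts = line.split()
            if len(parts) < 2:
                continue
            path = parts[1]  # format: type path [mode uid gid age argument]
            if path in seen:
                yield lineno, path, seen[path]
            else:
                seen[path] = lineno

    sample = [
        "d /run/lock 0755 root root -",
        "L /run/lock - - - - /tmp",  # duplicate, like the /run/lock warning above
    ]
    for lineno, path, first in find_duplicate_paths(sample):
        print(f"line {lineno}: duplicate line for path {path!r}, first seen on line {first}")
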
Dec 13 02:21:18.157462 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T02:21:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:21:18.165465 /usr/lib/systemd/system-generators/torcx-generator[1622]: time="2024-12-13T02:21:18Z" level=info msg="torcx already run" Dec 13 02:21:18.427498 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:21:18.428436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:21:18.491108 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:21:18.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.586979 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:21:18.588982 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:21:18.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.596271 systemd[1]: Starting audit-rules.service... Dec 13 02:21:18.600450 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:21:18.606379 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:21:18.613502 systemd[1]: Starting systemd-resolved.service... Dec 13 02:21:18.623001 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:21:18.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.631666 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:21:18.634087 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:21:18.639346 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:21:18.652000 audit[1689]: SYSTEM_BOOT pid=1689 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.658827 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.661319 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:21:18.666227 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:21:18.670878 systemd[1]: Starting modprobe@loop.service... Dec 13 02:21:18.673679 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
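
The reload pass above warns about legacy unit directives (CPUShares= superseded by CPUWeight=, MemoryLimit= by MemoryMax=) and about socket paths below /var/run/. A hypothetical lint helper built only from the substitutions named in those warnings; it deliberately reports rather than rewrites, since CPUShares= and CPUWeight= use different value ranges:

    # Substitution table taken from the deprecation warnings in the log above.
    DEPRECATED = {
        "CPUShares=": "CPUWeight=",
        "MemoryLimit=": "MemoryMax=",
    }

    def lint_unit(path: str) -> None:
        with open(path) as f:
            for lineno, line in enumerate(f, start=1):
                stripped = line.lstrip()
                for old, new in DEPRECATED.items():
                    if stripped.startswith(old):
                        print(f"{path}:{lineno}: {old} is deprecated; use {new}")
                if "/var/run/" in line:
                    print(f"{path}:{lineno}: references legacy /var/run/; use /run/")

    # Example (unit path from the warning above):
    # lint_unit("/usr/lib/systemd/system/locksmithd.service")
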
Dec 13 02:21:18.673975 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:18.674230 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:21:18.676871 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:21:18.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.677161 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:21:18.684707 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:21:18.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.690768 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.691673 systemd-networkd[1439]: eth0: Gained IPv6LL Dec 13 02:21:18.693985 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:21:18.695026 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.695412 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:18.695610 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:21:18.701216 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:21:18.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.703075 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:21:18.705263 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:21:18.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.706753 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:21:18.712799 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.714936 systemd[1]: Starting modprobe@drm.service... Dec 13 02:21:18.718254 systemd[1]: Starting modprobe@efi_pstore.service... 
Dec 13 02:21:18.719427 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.719667 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:18.719887 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:21:18.726125 systemd[1]: Finished ensure-sysext.service. Dec 13 02:21:18.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.727476 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:21:18.727764 systemd[1]: Finished modprobe@loop.service. Dec 13 02:21:18.739686 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:21:18.739936 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:21:18.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.741054 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:21:18.754302 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:21:18.754569 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:21:18.753000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.758280 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:21:18.758536 systemd[1]: Finished modprobe@drm.service. Dec 13 02:21:18.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:21:18.759788 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:18.759851 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:21:18.759872 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:21:18.791923 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:21:18.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:21:18.849000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:21:18.849000 audit[1722]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffef9217880 a2=420 a3=0 items=0 ppid=1683 pid=1722 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:21:18.849000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:21:18.851255 augenrules[1722]: No rules Dec 13 02:21:18.852980 systemd[1]: Finished audit-rules.service. Dec 13 02:21:18.869606 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:21:18.870805 systemd[1]: Reached target time-set.target. Dec 13 02:21:18.896471 systemd-resolved[1686]: Positive Trust Anchors: Dec 13 02:21:18.896487 systemd-resolved[1686]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:21:18.896528 systemd-resolved[1686]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:21:18.941623 systemd-resolved[1686]: Defaulting to hostname 'linux'. Dec 13 02:21:18.944039 systemd[1]: Started systemd-resolved.service. Dec 13 02:21:18.945404 systemd[1]: Reached target network.target. Dec 13 02:21:18.946606 systemd[1]: Reached target network-online.target. Dec 13 02:21:18.948774 systemd[1]: Reached target nss-lookup.target. Dec 13 02:21:18.990928 ldconfig[1556]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:21:19.002203 systemd[1]: Finished ldconfig.service. Dec 13 02:21:19.008218 systemd[1]: Starting systemd-update-done.service... Dec 13 02:21:19.019737 systemd[1]: Finished systemd-update-done.service. Dec 13 02:21:19.021049 systemd[1]: Reached target sysinit.target. Dec 13 02:21:19.022259 systemd[1]: Started motdgen.path. Dec 13 02:21:19.025490 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:21:19.028613 systemd[1]: Started logrotate.timer. Dec 13 02:21:19.029886 systemd[1]: Started mdadm.timer. Dec 13 02:21:19.030763 systemd[1]: Started systemd-tmpfiles-clean.timer. 
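
The PROCTITLE value in the audit record above is the process's command line, hex-encoded with NUL bytes separating argv entries. Decoding it recovers the auditctl invocation that loaded the rule file (which augenrules reports as empty):

    def decode_proctitle(hexstr: str) -> list:
        """Audit PROCTITLE values are hex-encoded argv, NUL-separated."""
        return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

    # Value copied verbatim from the PROCTITLE record above.
    print(decode_proctitle(
        "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    ))
    # -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
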
Dec 13 02:21:19.032241 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:21:19.032284 systemd[1]: Reached target paths.target. Dec 13 02:21:19.034485 systemd[1]: Reached target timers.target. Dec 13 02:21:19.036006 systemd[1]: Listening on dbus.socket. Dec 13 02:21:19.038182 systemd[1]: Starting docker.socket... Dec 13 02:21:19.041076 systemd[1]: Listening on sshd.socket. Dec 13 02:21:19.043079 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:19.043599 systemd[1]: Listening on docker.socket. Dec 13 02:21:19.044813 systemd[1]: Reached target sockets.target. Dec 13 02:21:19.048332 systemd[1]: Reached target basic.target. Dec 13 02:21:19.049834 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:21:19.050031 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:21:19.050067 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:21:19.054372 systemd[1]: Started amazon-ssm-agent.service. Dec 13 02:21:19.058869 systemd[1]: Starting containerd.service... Dec 13 02:21:19.061306 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:21:19.064492 systemd[1]: Starting dbus.service... Dec 13 02:21:19.067364 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:21:19.071944 systemd[1]: Starting extend-filesystems.service... Dec 13 02:21:19.085464 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:21:19.089127 systemd[1]: Starting kubelet.service... Dec 13 02:21:19.095525 systemd[1]: Starting motdgen.service... Dec 13 02:21:19.098395 systemd[1]: Started nvidia.service. Dec 13 02:21:19.126698 systemd[1]: Starting prepare-helm.service... Dec 13 02:21:19.130946 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:21:19.135496 systemd[1]: Starting sshd-keygen.service... Dec 13 02:21:19.139689 systemd[1]: Starting systemd-logind.service... Dec 13 02:21:19.142822 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:21:19.142911 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 02:21:19.252114 jq[1738]: false Dec 13 02:21:19.148370 systemd[1]: Starting update-engine.service... Dec 13 02:21:19.153249 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:21:19.272887 jq[1751]: true Dec 13 02:21:19.240680 systemd-timesyncd[1687]: Contacted time server 45.84.199.136:123 (0.flatcar.pool.ntp.org). Dec 13 02:21:19.241060 systemd-timesyncd[1687]: Initial clock synchronization to Fri 2024-12-13 02:21:19.179761 UTC. Dec 13 02:21:19.355081 tar[1755]: linux-amd64/helm Dec 13 02:21:19.241810 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:21:19.242145 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:21:19.259037 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:21:19.259373 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Dec 13 02:21:19.418878 jq[1763]: true Dec 13 02:21:19.443304 dbus-daemon[1737]: [system] SELinux support is enabled Dec 13 02:21:19.459079 systemd[1]: Started dbus.service. Dec 13 02:21:19.463484 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:21:19.463529 systemd[1]: Reached target system-config.target. Dec 13 02:21:19.464763 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:21:19.464879 systemd[1]: Reached target user-config.target. Dec 13 02:21:19.467568 extend-filesystems[1740]: Found loop1 Dec 13 02:21:19.521944 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:21:19.522508 systemd[1]: Finished motdgen.service. Dec 13 02:21:19.526942 dbus-daemon[1737]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1439 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 02:21:19.536407 extend-filesystems[1740]: Found nvme0n1 Dec 13 02:21:19.537669 extend-filesystems[1740]: Found nvme0n1p1 Dec 13 02:21:19.538604 extend-filesystems[1740]: Found nvme0n1p2 Dec 13 02:21:19.538604 extend-filesystems[1740]: Found nvme0n1p3 Dec 13 02:21:19.538604 extend-filesystems[1740]: Found usr Dec 13 02:21:19.538604 extend-filesystems[1740]: Found nvme0n1p4 Dec 13 02:21:19.542858 extend-filesystems[1740]: Found nvme0n1p6 Dec 13 02:21:19.542858 extend-filesystems[1740]: Found nvme0n1p7 Dec 13 02:21:19.542858 extend-filesystems[1740]: Found nvme0n1p9 Dec 13 02:21:19.542858 extend-filesystems[1740]: Checking size of /dev/nvme0n1p9 Dec 13 02:21:19.542159 dbus-daemon[1737]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 02:21:19.548002 systemd[1]: Starting systemd-hostnamed.service... Dec 13 02:21:19.603391 extend-filesystems[1740]: Resized partition /dev/nvme0n1p9 Dec 13 02:21:19.614350 amazon-ssm-agent[1734]: 2024/12/13 02:21:19 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 02:21:19.620195 amazon-ssm-agent[1734]: Initializing new seelog logger Dec 13 02:21:19.620381 amazon-ssm-agent[1734]: New Seelog Logger Creation Complete Dec 13 02:21:19.620485 amazon-ssm-agent[1734]: 2024/12/13 02:21:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:21:19.620485 amazon-ssm-agent[1734]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 02:21:19.620795 amazon-ssm-agent[1734]: 2024/12/13 02:21:19 processing appconfig overrides Dec 13 02:21:19.631161 extend-filesystems[1816]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:21:19.640259 update_engine[1750]: I1213 02:21:19.638989 1750 main.cc:92] Flatcar Update Engine starting Dec 13 02:21:19.643569 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 02:21:19.661445 systemd[1]: Started update-engine.service. Dec 13 02:21:19.662071 update_engine[1750]: I1213 02:21:19.661942 1750 update_check_scheduler.cc:74] Next update check in 4m9s Dec 13 02:21:19.665240 systemd[1]: Started locksmithd.service. 
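extend-filesystems is growing the root filesystem online: the kernel line shows ext4 on nvme0n1p9 being resized from 553472 to 1489915 4KiB blocks (roughly 5.7 GiB). A hand-run equivalent on a generic cloud image, as a sketch (growpart ships in cloud-utils and may be absent; Flatcar's own unit performs both steps here):

```sh
# Grow partition 9 to fill the disk, then grow ext4 in place while mounted.
growpart /dev/nvme0n1 9
resize2fs /dev/nvme0n1p9    # 553472 -> 1489915 blocks, done online
```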
Dec 13 02:21:19.786503 systemd-logind[1749]: Watching system buttons on /dev/input/event1 (Power Button) Dec 13 02:21:19.786533 systemd-logind[1749]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:21:19.787812 systemd-logind[1749]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 02:21:19.788166 systemd-logind[1749]: New seat seat0. Dec 13 02:21:19.790723 systemd[1]: Started systemd-logind.service. Dec 13 02:21:19.798616 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Dec 13 02:21:19.801629 env[1758]: time="2024-12-13T02:21:19.800254938Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:21:19.830186 extend-filesystems[1816]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Dec 13 02:21:19.830186 extend-filesystems[1816]: old_desc_blocks = 1, new_desc_blocks = 1 Dec 13 02:21:19.830186 extend-filesystems[1816]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Dec 13 02:21:19.854862 extend-filesystems[1740]: Resized filesystem in /dev/nvme0n1p9 Dec 13 02:21:19.858348 bash[1820]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:21:19.830720 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:21:19.831024 systemd[1]: Finished extend-filesystems.service. Dec 13 02:21:19.843237 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:21:19.956106 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 02:21:20.066091 env[1758]: time="2024-12-13T02:21:20.065528725Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:21:20.066229 env[1758]: time="2024-12-13T02:21:20.066167976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069027 env[1758]: time="2024-12-13T02:21:20.068978869Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069027 env[1758]: time="2024-12-13T02:21:20.069025548Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069581 env[1758]: time="2024-12-13T02:21:20.069469040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069691 env[1758]: time="2024-12-13T02:21:20.069582672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069691 env[1758]: time="2024-12-13T02:21:20.069605271Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:21:20.069691 env[1758]: time="2024-12-13T02:21:20.069618679Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:21:20.069807 env[1758]: time="2024-12-13T02:21:20.069728104Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Dec 13 02:21:20.070158 env[1758]: time="2024-12-13T02:21:20.070129078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:21:20.070458 env[1758]: time="2024-12-13T02:21:20.070429756Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:21:20.070515 env[1758]: time="2024-12-13T02:21:20.070461375Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 02:21:20.087677 env[1758]: time="2024-12-13T02:21:20.087623485Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:21:20.087677 env[1758]: time="2024-12-13T02:21:20.087677689Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:21:20.100663 env[1758]: time="2024-12-13T02:21:20.100611618Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:21:20.100800 env[1758]: time="2024-12-13T02:21:20.100676315Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:21:20.100800 env[1758]: time="2024-12-13T02:21:20.100712232Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:21:20.100800 env[1758]: time="2024-12-13T02:21:20.100770567Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.100800 env[1758]: time="2024-12-13T02:21:20.100790068Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.100980647Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101008105Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101126379Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101148879Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101197712Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101217004Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.101263 env[1758]: time="2024-12-13T02:21:20.101240043Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:21:20.101769 env[1758]: time="2024-12-13T02:21:20.101615920Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:21:20.101769 env[1758]: time="2024-12-13T02:21:20.101734922Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Dec 13 02:21:20.102469 env[1758]: time="2024-12-13T02:21:20.102440764Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:21:20.106974 env[1758]: time="2024-12-13T02:21:20.106927540Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.107156 env[1758]: time="2024-12-13T02:21:20.107138613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:21:20.107294 env[1758]: time="2024-12-13T02:21:20.107277107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.107760 env[1758]: time="2024-12-13T02:21:20.107737138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.107952 env[1758]: time="2024-12-13T02:21:20.107931551Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.108040 env[1758]: time="2024-12-13T02:21:20.108025839Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.108111 env[1758]: time="2024-12-13T02:21:20.108099439Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.109479 env[1758]: time="2024-12-13T02:21:20.109450089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.109619 env[1758]: time="2024-12-13T02:21:20.109602582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.109723 env[1758]: time="2024-12-13T02:21:20.109706532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.110353 env[1758]: time="2024-12-13T02:21:20.110331072Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:21:20.112784 env[1758]: time="2024-12-13T02:21:20.112753736Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.114172 env[1758]: time="2024-12-13T02:21:20.114144112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.114364 env[1758]: time="2024-12-13T02:21:20.114345119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:21:20.114462 env[1758]: time="2024-12-13T02:21:20.114445971Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:21:20.114585 env[1758]: time="2024-12-13T02:21:20.114560897Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:21:20.114669 env[1758]: time="2024-12-13T02:21:20.114655132Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:21:20.114750 env[1758]: time="2024-12-13T02:21:20.114735733Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:21:20.115141 env[1758]: time="2024-12-13T02:21:20.115118761Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:21:20.115610 env[1758]: time="2024-12-13T02:21:20.115521793Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:21:20.119733 env[1758]: time="2024-12-13T02:21:20.118610671Z" level=info msg="Connect containerd service" Dec 13 02:21:20.123909 env[1758]: time="2024-12-13T02:21:20.122624686Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:21:20.124087 env[1758]: time="2024-12-13T02:21:20.124047070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:21:20.128682 env[1758]: time="2024-12-13T02:21:20.128617807Z" level=info msg="Start subscribing containerd event" Dec 13 02:21:20.129430 env[1758]: time="2024-12-13T02:21:20.129402900Z" level=info msg="Start recovering state" Dec 13 02:21:20.129650 env[1758]: time="2024-12-13T02:21:20.129635436Z" level=info msg="Start event monitor" Dec 13 02:21:20.129725 env[1758]: time="2024-12-13T02:21:20.129711367Z" level=info msg="Start snapshots syncer" Dec 13 02:21:20.130517 env[1758]: time="2024-12-13T02:21:20.130490997Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:21:20.133773 env[1758]: time="2024-12-13T02:21:20.133737681Z" level=info msg="Start streaming server" Dec 13 02:21:20.134166 env[1758]: time="2024-12-13T02:21:20.134146472Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Dec 13 02:21:20.136747 env[1758]: time="2024-12-13T02:21:20.134660581Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 02:21:20.136747 env[1758]: time="2024-12-13T02:21:20.134996366Z" level=info msg="containerd successfully booted in 0.429400s" Dec 13 02:21:20.134881 systemd[1]: Started containerd.service. Dec 13 02:21:20.192777 dbus-daemon[1737]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 02:21:20.192984 systemd[1]: Started systemd-hostnamed.service. Dec 13 02:21:20.196852 dbus-daemon[1737]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1797 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 02:21:20.203045 systemd[1]: Starting polkit.service... Dec 13 02:21:20.249946 polkitd[1871]: Started polkitd version 121 Dec 13 02:21:20.286958 polkitd[1871]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 02:21:20.295530 polkitd[1871]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 02:21:20.303109 polkitd[1871]: Finished loading, compiling and executing 2 rules Dec 13 02:21:20.305883 dbus-daemon[1737]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 02:21:20.306110 systemd[1]: Started polkit.service. Dec 13 02:21:20.314004 polkitd[1871]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 02:21:20.351415 systemd-hostnamed[1797]: Hostname set to (transient) Dec 13 02:21:20.351580 systemd-resolved[1686]: System hostname changed to 'ip-172-31-30-169'. Dec 13 02:21:20.517991 coreos-metadata[1736]: Dec 13 02:21:20.515 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Dec 13 02:21:20.525568 coreos-metadata[1736]: Dec 13 02:21:20.525 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Dec 13 02:21:20.527066 coreos-metadata[1736]: Dec 13 02:21:20.526 INFO Fetch successful Dec 13 02:21:20.527235 coreos-metadata[1736]: Dec 13 02:21:20.527 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 02:21:20.528410 coreos-metadata[1736]: Dec 13 02:21:20.528 INFO Fetch successful Dec 13 02:21:20.532840 unknown[1736]: wrote ssh authorized keys file for user: core Dec 13 02:21:20.598214 update-ssh-keys[1918]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:21:20.598931 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
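coreos-metadata pulls the instance's public SSH key from the EC2 instance metadata service; the PUT against /latest/api/token is the IMDSv2 session handshake. A minimal reproduction with curl, using the same endpoints the log shows (the header names are the documented IMDSv2 ones):

```sh
# IMDSv2: fetch a session token, then read the first registered public key.
TOKEN=$(curl -sX PUT http://169.254.169.254/latest/api/token \
  -H 'X-aws-ec2-metadata-token-ttl-seconds: 300')
curl -s -H "X-aws-ec2-metadata-token: $TOKEN" \
  http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key
```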
Dec 13 02:21:20.698249 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Create new startup processor Dec 13 02:21:20.699020 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [LongRunningPluginsManager] registered plugins: {} Dec 13 02:21:20.699131 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing bookkeeping folders Dec 13 02:21:20.699200 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO removing the completed state files Dec 13 02:21:20.699399 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing bookkeeping folders for long running plugins Dec 13 02:21:20.699474 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Dec 13 02:21:20.699570 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing healthcheck folders for long running plugins Dec 13 02:21:20.699764 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing locations for inventory plugin Dec 13 02:21:20.699858 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing default location for custom inventory Dec 13 02:21:20.700634 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing default location for file inventory Dec 13 02:21:20.701045 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Initializing default location for role inventory Dec 13 02:21:20.701182 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Init the cloudwatchlogs publisher Dec 13 02:21:20.703444 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:updateSsmAgent Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:refreshAssociation Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:runDocument Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:runDockerAction Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:configurePackage Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:downloadContent Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:softwareInventory Dec 13 02:21:20.703528 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:runPowerShellScript Dec 13 02:21:20.703804 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform independent plugin aws:configureDocker Dec 13 02:21:20.703804 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Successfully loaded platform dependent plugin aws:runShellScript Dec 13 02:21:20.703804 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Dec 13 02:21:20.703804 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO OS: linux, Arch: amd64 Dec 13 02:21:20.730671 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] Starting document processing engine... 
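With the SSM agent's plugins loaded, the instance becomes reachable through AWS Session Manager without opening SSH. A sketch of connecting from a workstation, assuming the AWS CLI plus session-manager-plugin are installed and IAM allows ssm:StartSession (instance ID and region are taken from the agent's own log lines):

```sh
aws ssm start-session --target i-008260bf43fe5d044 --region us-west-2
```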
Dec 13 02:21:20.732227 amazon-ssm-agent[1734]: datastore file /var/lib/amazon/ssm/i-008260bf43fe5d044/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Dec 13 02:21:20.829032 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [EngineProcessor] Starting Dec 13 02:21:20.924073 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Dec 13 02:21:21.018472 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] Starting message polling Dec 13 02:21:21.114055 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] Starting send replies to MDS Dec 13 02:21:21.208998 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [instanceID=i-008260bf43fe5d044] Starting association polling Dec 13 02:21:21.210527 tar[1755]: linux-amd64/LICENSE Dec 13 02:21:21.211075 tar[1755]: linux-amd64/README.md Dec 13 02:21:21.223030 systemd[1]: Finished prepare-helm.service. Dec 13 02:21:21.304083 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Dec 13 02:21:21.400320 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [Association] Launching response handler Dec 13 02:21:21.485109 locksmithd[1822]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:21:21.495808 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Dec 13 02:21:21.591504 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Dec 13 02:21:21.687340 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Dec 13 02:21:21.783463 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] Starting session document processing engine... Dec 13 02:21:21.839756 sshd_keygen[1775]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:21:21.866571 systemd[1]: Finished sshd-keygen.service. Dec 13 02:21:21.870058 systemd[1]: Starting issuegen.service... Dec 13 02:21:21.879711 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] [EngineProcessor] Starting Dec 13 02:21:21.881740 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:21:21.882085 systemd[1]: Finished issuegen.service. Dec 13 02:21:21.885959 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:21:21.897281 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:21:21.900476 systemd[1]: Started getty@tty1.service. Dec 13 02:21:21.903502 systemd[1]: Started serial-getty@ttyS0.service. Dec 13 02:21:21.904999 systemd[1]: Reached target getty.target. Dec 13 02:21:21.976119 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Dec 13 02:21:22.072855 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-008260bf43fe5d044, requestId: 6ac6f5a5-3a2e-4491-98d8-322f97c29975 Dec 13 02:21:22.169699 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [OfflineService] Starting document processing engine... Dec 13 02:21:22.266820 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [OfflineService] [EngineProcessor] Starting Dec 13 02:21:22.317440 systemd[1]: Started kubelet.service. 
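sshd-keygen creates any missing host keys (RSA, ECDSA, ED25519) before sshd starts accepting connections. The stock OpenSSH one-shot equivalent:

```sh
# Generate every missing host key type at the default /etc/ssh paths.
ssh-keygen -A
ls /etc/ssh/ssh_host_*_key.pub
```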
Dec 13 02:21:22.318897 systemd[1]: Reached target multi-user.target. Dec 13 02:21:22.321950 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:21:22.335760 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:21:22.336120 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:21:22.345907 systemd[1]: Startup finished in 10.845s (kernel) + 11.710s (userspace) = 22.556s. Dec 13 02:21:22.364028 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [OfflineService] [EngineProcessor] Initial processing Dec 13 02:21:22.461719 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [OfflineService] Starting message polling Dec 13 02:21:22.559376 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [OfflineService] Starting send replies to MDS Dec 13 02:21:22.657313 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [LongRunningPluginsManager] starting long running plugin manager Dec 13 02:21:22.755319 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Dec 13 02:21:22.853566 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:21:22.952071 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Dec 13 02:21:23.050604 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [StartupProcessor] Executing startup processor tasks Dec 13 02:21:23.149403 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Dec 13 02:21:23.248715 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Dec 13 02:21:23.348050 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6 Dec 13 02:21:23.447388 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] listening reply. Dec 13 02:21:23.547305 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-008260bf43fe5d044?role=subscribe&stream=input Dec 13 02:21:23.647181 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-008260bf43fe5d044?role=subscribe&stream=input Dec 13 02:21:23.748326 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] Starting receiving message from control channel Dec 13 02:21:23.758609 kubelet[1971]: E1213 02:21:23.758477 1971 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:21:23.760860 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:21:23.761073 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:21:23.848296 amazon-ssm-agent[1734]: 2024-12-13 02:21:20 INFO [MessageGatewayService] [EngineProcessor] Initial processing Dec 13 02:21:28.702507 systemd[1]: Created slice system-sshd.slice. 
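The startup summary splits boot time into kernel and userspace phases (10.845s + 11.710s, reported as 22.556s after rounding). systemd can break the userspace share down per unit once the system is up; standard queries, whose output naturally differs per boot:

```sh
systemd-analyze                # same kernel/userspace split the log prints
systemd-analyze blame | head   # slowest-starting units first
```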
Dec 13 02:21:28.707979 systemd[1]: Started sshd@0-172.31.30.169:22-139.178.68.195:50198.service. Dec 13 02:21:28.917882 sshd[1980]: Accepted publickey for core from 139.178.68.195 port 50198 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:28.921217 sshd[1980]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:28.935112 systemd[1]: Created slice user-500.slice. Dec 13 02:21:28.936496 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:21:28.939944 systemd-logind[1749]: New session 1 of user core. Dec 13 02:21:28.955765 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:21:28.958191 systemd[1]: Starting user@500.service... Dec 13 02:21:28.964672 (systemd)[1985]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:29.093994 systemd[1985]: Queued start job for default target default.target. Dec 13 02:21:29.094321 systemd[1985]: Reached target paths.target. Dec 13 02:21:29.094347 systemd[1985]: Reached target sockets.target. Dec 13 02:21:29.094366 systemd[1985]: Reached target timers.target. Dec 13 02:21:29.094383 systemd[1985]: Reached target basic.target. Dec 13 02:21:29.094571 systemd[1]: Started user@500.service. Dec 13 02:21:29.097049 systemd[1]: Started session-1.scope. Dec 13 02:21:29.097598 systemd[1985]: Reached target default.target. Dec 13 02:21:29.097899 systemd[1985]: Startup finished in 124ms. Dec 13 02:21:29.236789 systemd[1]: Started sshd@1-172.31.30.169:22-139.178.68.195:50206.service. Dec 13 02:21:29.411563 sshd[1994]: Accepted publickey for core from 139.178.68.195 port 50206 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:29.413033 sshd[1994]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:29.419970 systemd-logind[1749]: New session 2 of user core. Dec 13 02:21:29.420640 systemd[1]: Started session-2.scope. Dec 13 02:21:29.544868 sshd[1994]: pam_unix(sshd:session): session closed for user core Dec 13 02:21:29.549750 systemd[1]: sshd@1-172.31.30.169:22-139.178.68.195:50206.service: Deactivated successfully. Dec 13 02:21:29.552505 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 02:21:29.553351 systemd-logind[1749]: Session 2 logged out. Waiting for processes to exit. Dec 13 02:21:29.558760 systemd-logind[1749]: Removed session 2. Dec 13 02:21:29.572531 systemd[1]: Started sshd@2-172.31.30.169:22-139.178.68.195:50208.service. Dec 13 02:21:29.747585 sshd[2001]: Accepted publickey for core from 139.178.68.195 port 50208 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:29.749005 sshd[2001]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:29.754287 systemd-logind[1749]: New session 3 of user core. Dec 13 02:21:29.754898 systemd[1]: Started session-3.scope. Dec 13 02:21:29.877982 sshd[2001]: pam_unix(sshd:session): session closed for user core Dec 13 02:21:29.882219 systemd[1]: sshd@2-172.31.30.169:22-139.178.68.195:50208.service: Deactivated successfully. Dec 13 02:21:29.883840 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 02:21:29.883858 systemd-logind[1749]: Session 3 logged out. Waiting for processes to exit. Dec 13 02:21:29.885652 systemd-logind[1749]: Removed session 3. Dec 13 02:21:29.902610 systemd[1]: Started sshd@3-172.31.30.169:22-139.178.68.195:50222.service. 
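The sshd/pam_unix lines show sessions for user core (uid 500) being opened and closed, each scoped under the per-user manager user@500.service. The same state can be inspected live with logind's CLI:

```sh
loginctl list-sessions      # one row per active pam session
loginctl user-status core   # shows user@500.service and the session scopes
```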
Dec 13 02:21:30.067535 sshd[2008]: Accepted publickey for core from 139.178.68.195 port 50222 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:30.069060 sshd[2008]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:30.080631 systemd-logind[1749]: New session 4 of user core. Dec 13 02:21:30.081280 systemd[1]: Started session-4.scope. Dec 13 02:21:30.219037 sshd[2008]: pam_unix(sshd:session): session closed for user core Dec 13 02:21:30.222824 systemd[1]: sshd@3-172.31.30.169:22-139.178.68.195:50222.service: Deactivated successfully. Dec 13 02:21:30.224490 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:21:30.225171 systemd-logind[1749]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:21:30.227041 systemd-logind[1749]: Removed session 4. Dec 13 02:21:30.247991 systemd[1]: Started sshd@4-172.31.30.169:22-139.178.68.195:50236.service. Dec 13 02:21:30.422132 sshd[2015]: Accepted publickey for core from 139.178.68.195 port 50236 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:21:30.425248 sshd[2015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:21:30.448625 systemd-logind[1749]: New session 5 of user core. Dec 13 02:21:30.449589 systemd[1]: Started session-5.scope. Dec 13 02:21:30.607189 sudo[2019]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:21:30.608778 sudo[2019]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:21:30.658794 systemd[1]: Starting docker.service... Dec 13 02:21:30.733384 env[2029]: time="2024-12-13T02:21:30.733318355Z" level=info msg="Starting up" Dec 13 02:21:30.735454 env[2029]: time="2024-12-13T02:21:30.735421857Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:21:30.735590 env[2029]: time="2024-12-13T02:21:30.735571699Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:21:30.735759 env[2029]: time="2024-12-13T02:21:30.735647340Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:21:30.735759 env[2029]: time="2024-12-13T02:21:30.735666210Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:21:30.737766 env[2029]: time="2024-12-13T02:21:30.737735264Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:21:30.737766 env[2029]: time="2024-12-13T02:21:30.737755302Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:21:30.737902 env[2029]: time="2024-12-13T02:21:30.737779296Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:21:30.737902 env[2029]: time="2024-12-13T02:21:30.737791085Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:21:30.762517 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1027343983-merged.mount: Deactivated successfully. Dec 13 02:21:30.982035 env[2029]: time="2024-12-13T02:21:30.981926547Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 02:21:30.982035 env[2029]: time="2024-12-13T02:21:30.981955003Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 02:21:30.982864 env[2029]: time="2024-12-13T02:21:30.982827551Z" level=info msg="Loading containers: start." 
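docker.service begins starting right after the sudo session runs install.sh, most likely via socket activation: docker.socket was set up listening earlier, so the first client touching /run/docker.sock pulls the daemon up. A sketch for observing that hand-off on an idle machine with the same socket-activated setup (this is an inference from the ordering here, not something the log states outright):

```sh
systemctl status docker.socket    # active (listening) before any client
docker info >/dev/null            # first connection to /run/docker.sock ...
systemctl is-active docker        # ... has now started docker.service
```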
Dec 13 02:21:31.208669 kernel: Initializing XFRM netlink socket Dec 13 02:21:31.291281 env[2029]: time="2024-12-13T02:21:31.290957997Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:21:31.292671 (udev-worker)[2039]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:21:31.470401 systemd-networkd[1439]: docker0: Link UP Dec 13 02:21:31.488510 env[2029]: time="2024-12-13T02:21:31.488473893Z" level=info msg="Loading containers: done." Dec 13 02:21:31.517214 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3017884949-merged.mount: Deactivated successfully. Dec 13 02:21:31.527064 env[2029]: time="2024-12-13T02:21:31.527010401Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:21:31.527382 env[2029]: time="2024-12-13T02:21:31.527238580Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:21:31.527481 env[2029]: time="2024-12-13T02:21:31.527458028Z" level=info msg="Daemon has completed initialization" Dec 13 02:21:31.548354 systemd[1]: Started docker.service. Dec 13 02:21:31.564466 env[2029]: time="2024-12-13T02:21:31.564341005Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:21:33.323408 env[1758]: time="2024-12-13T02:21:33.323365651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:21:33.897822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:21:33.898470 systemd[1]: Stopped kubelet.service. Dec 13 02:21:33.909085 systemd[1]: Starting kubelet.service... Dec 13 02:21:33.918336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2267790915.mount: Deactivated successfully. Dec 13 02:21:34.149254 systemd[1]: Started kubelet.service. Dec 13 02:21:34.323353 kubelet[2166]: E1213 02:21:34.323230 2166 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:21:34.329181 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:21:34.329397 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
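The daemon itself names the knob for the default bridge network: docker0 gets 172.17.0.0/16 unless --bip, or the equivalent daemon.json key, overrides it. A minimal sketch with an example address (10.200.0.1/24 is illustrative, not from this log):

```sh
# Hypothetical override: move docker0 off 172.17.0.0/16.
cat >/etc/docker/daemon.json <<'EOF'
{ "bip": "10.200.0.1/24" }
EOF
systemctl restart docker
```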
Dec 13 02:21:36.298183 env[1758]: time="2024-12-13T02:21:36.298128332Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:36.300961 env[1758]: time="2024-12-13T02:21:36.300916530Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:36.302956 env[1758]: time="2024-12-13T02:21:36.302910969Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:36.305911 env[1758]: time="2024-12-13T02:21:36.305853696Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:36.306826 env[1758]: time="2024-12-13T02:21:36.306784651Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:21:36.318524 env[1758]: time="2024-12-13T02:21:36.318481934Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:21:39.130274 env[1758]: time="2024-12-13T02:21:39.130018909Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:39.133136 env[1758]: time="2024-12-13T02:21:39.133089766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:39.135751 env[1758]: time="2024-12-13T02:21:39.135710051Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:39.138982 env[1758]: time="2024-12-13T02:21:39.138905114Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:39.140517 env[1758]: time="2024-12-13T02:21:39.140476639Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:21:39.153158 env[1758]: time="2024-12-13T02:21:39.153116084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:21:40.758762 env[1758]: time="2024-12-13T02:21:40.758708050Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:40.762812 env[1758]: time="2024-12-13T02:21:40.762768374Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:40.765152 env[1758]: 
time="2024-12-13T02:21:40.765113804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:40.767351 env[1758]: time="2024-12-13T02:21:40.767309439Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:40.768159 env[1758]: time="2024-12-13T02:21:40.768122008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:21:40.783432 env[1758]: time="2024-12-13T02:21:40.783370446Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:21:42.055648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount667060945.mount: Deactivated successfully. Dec 13 02:21:42.972767 env[1758]: time="2024-12-13T02:21:42.972583974Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:42.975084 env[1758]: time="2024-12-13T02:21:42.975039290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:42.977844 env[1758]: time="2024-12-13T02:21:42.977771411Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:42.981888 env[1758]: time="2024-12-13T02:21:42.981838700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:42.982596 env[1758]: time="2024-12-13T02:21:42.982449794Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:21:43.008907 env[1758]: time="2024-12-13T02:21:43.008517717Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:21:43.594950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2057876764.mount: Deactivated successfully. Dec 13 02:21:44.537880 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:21:44.538127 systemd[1]: Stopped kubelet.service. Dec 13 02:21:44.541152 systemd[1]: Starting kubelet.service... Dec 13 02:21:44.858454 systemd[1]: Started kubelet.service. Dec 13 02:21:44.988903 kubelet[2202]: E1213 02:21:44.988848 2202 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:21:44.991733 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:21:44.992050 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 02:21:45.076562 env[1758]: time="2024-12-13T02:21:45.076500686Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.080273 env[1758]: time="2024-12-13T02:21:45.080201720Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.084447 env[1758]: time="2024-12-13T02:21:45.084402042Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.087227 env[1758]: time="2024-12-13T02:21:45.087150685Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.088894 env[1758]: time="2024-12-13T02:21:45.088847727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:21:45.104301 env[1758]: time="2024-12-13T02:21:45.104242185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:21:45.620837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4009881920.mount: Deactivated successfully. Dec 13 02:21:45.626730 env[1758]: time="2024-12-13T02:21:45.626676106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.628848 env[1758]: time="2024-12-13T02:21:45.628728585Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.632505 env[1758]: time="2024-12-13T02:21:45.632402167Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.634668 env[1758]: time="2024-12-13T02:21:45.634626655Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:45.636259 env[1758]: time="2024-12-13T02:21:45.636220063Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:21:45.651462 env[1758]: time="2024-12-13T02:21:45.651364444Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:21:46.209740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2766156922.mount: Deactivated successfully. Dec 13 02:21:48.267860 amazon-ssm-agent[1734]: 2024-12-13 02:21:48 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. 
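One detail worth noticing: containerd's CRI config dumped earlier in this boot advertises SandboxImage registry.k8s.io/pause:3.6, while the kubelet side just pulled pause:3.9. Aligning the two is a single CRI setting; a sketch of the config.toml fragment, written as a heredoc for illustration (merge it into the existing [plugins] section rather than appending blindly):

```sh
# Pin containerd's sandbox (pause) image to the version kubelet uses.
cat >>/etc/containerd/config.toml <<'EOF'
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.9"
EOF
systemctl restart containerd
```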
Dec 13 02:21:49.361751 env[1758]: time="2024-12-13T02:21:49.361695238Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:49.365346 env[1758]: time="2024-12-13T02:21:49.365298171Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:49.369132 env[1758]: time="2024-12-13T02:21:49.369086037Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:49.372659 env[1758]: time="2024-12-13T02:21:49.372615317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:49.373499 env[1758]: time="2024-12-13T02:21:49.373457007Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:21:50.372530 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Dec 13 02:21:53.661881 systemd[1]: Stopped kubelet.service. Dec 13 02:21:53.665219 systemd[1]: Starting kubelet.service... Dec 13 02:21:53.696101 systemd[1]: Reloading. Dec 13 02:21:53.908271 /usr/lib/systemd/system-generators/torcx-generator[2309]: time="2024-12-13T02:21:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:21:53.908938 /usr/lib/systemd/system-generators/torcx-generator[2309]: time="2024-12-13T02:21:53Z" level=info msg="torcx already run" Dec 13 02:21:54.093165 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:21:54.093190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:21:54.122339 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:21:54.301140 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:21:54.301279 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:21:54.303321 systemd[1]: Stopped kubelet.service. Dec 13 02:21:54.306983 systemd[1]: Starting kubelet.service... Dec 13 02:21:55.048720 systemd[1]: Started kubelet.service. Dec 13 02:21:55.174371 kubelet[2376]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:21:55.174371 kubelet[2376]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
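The kubelet deprecation warnings that follow say --container-runtime-endpoint and --volume-plugin-dir should move into the kubelet config file. A sketch of the corresponding KubeletConfiguration keys, with values taken from this log's runtime socket and flexvolume path (field names per upstream kubelet docs for the v1.29 generation; verify against your version):

```sh
cat >>/var/lib/kubelet/config.yaml <<'EOF'
# KubeletConfiguration equivalents of the deprecated CLI flags:
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
EOF
```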
Dec 13 02:21:55.174371 kubelet[2376]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:21:55.174938 kubelet[2376]: I1213 02:21:55.174436 2376 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:21:55.655044 kubelet[2376]: I1213 02:21:55.655003 2376 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:21:55.655044 kubelet[2376]: I1213 02:21:55.655035 2376 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:21:55.655326 kubelet[2376]: I1213 02:21:55.655304 2376 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:21:55.728493 kubelet[2376]: E1213 02:21:55.728457 2376 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.169:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.731099 kubelet[2376]: I1213 02:21:55.731062 2376 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:21:55.755283 kubelet[2376]: I1213 02:21:55.755245 2376 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:21:55.755854 kubelet[2376]: I1213 02:21:55.755831 2376 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:21:55.756065 kubelet[2376]: I1213 02:21:55.756043 2376 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:21:55.756203 kubelet[2376]: I1213 02:21:55.756078 2376 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:21:55.756203 kubelet[2376]: I1213 02:21:55.756092 2376 container_manager_linux.go:301] "Creating device plugin manager" Dec 
13 02:21:55.756299 kubelet[2376]: I1213 02:21:55.756238 2376 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:21:55.756389 kubelet[2376]: I1213 02:21:55.756370 2376 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:21:55.756443 kubelet[2376]: I1213 02:21:55.756394 2376 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:21:55.756823 kubelet[2376]: I1213 02:21:55.756804 2376 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:21:55.756902 kubelet[2376]: I1213 02:21:55.756830 2376 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:21:55.760630 kubelet[2376]: W1213 02:21:55.757017 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.761084 kubelet[2376]: E1213 02:21:55.761064 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.763024 kubelet[2376]: I1213 02:21:55.763001 2376 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:21:55.772235 kubelet[2376]: W1213 02:21:55.772173 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.772440 kubelet[2376]: E1213 02:21:55.772427 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.774403 kubelet[2376]: I1213 02:21:55.774361 2376 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:21:55.776411 kubelet[2376]: W1213 02:21:55.776379 2376 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:21:55.777048 kubelet[2376]: I1213 02:21:55.777026 2376 server.go:1256] "Started kubelet" Dec 13 02:21:55.777218 kubelet[2376]: I1213 02:21:55.777193 2376 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:21:55.778716 kubelet[2376]: I1213 02:21:55.778028 2376 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:21:55.782242 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
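[Editor's note] The deprecation warnings at kubelet startup (--volume-plugin-dir, --pod-infra-container-image, --container-runtime-endpoint) all point at the kubelet config file. A minimal sketch of the file-based equivalent, using the public kubelet.config.k8s.io/v1beta1 types: the eviction thresholds, static pod path, and volume plugin directory are taken from the nodeConfig dump and flexvolume probe message above, while emitting the config as YAML here is purely illustrative.

```go
// Sketch: config-file equivalent of the deprecated kubelet flags seen above.
package main

import (
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	kubeletv1beta1 "k8s.io/kubelet/config/v1beta1"
	"sigs.k8s.io/yaml"
)

func main() {
	cfg := kubeletv1beta1.KubeletConfiguration{
		TypeMeta: metav1.TypeMeta{
			APIVersion: "kubelet.config.k8s.io/v1beta1",
			Kind:       "KubeletConfiguration",
		},
		// Matches the "Adding static pod path" message in this log.
		StaticPodPath: "/etc/kubernetes/manifests",
		// Hard eviction thresholds from the Node Config dump above
		// (0.15 -> 15%, 100Mi, 0.1 -> 10%, 0.05 -> 5%).
		EvictionHard: map[string]string{
			"imagefs.available": "15%",
			"memory.available":  "100Mi",
			"nodefs.available":  "10%",
			"nodefs.inodesFree": "5%",
		},
		// File-based replacement for the deprecated --volume-plugin-dir flag;
		// the path matches the flexvolume probe message above.
		VolumePluginDir: "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
	}
	out, err := yaml.Marshal(&cfg)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out)) // would be written to the file passed via --config
}
```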
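[Editor's note] The certificate_manager error above is the kubelet's TLS bootstrap: it POSTs a CertificateSigningRequest for the kubernetes.io/kube-apiserver-client-kubelet signer and keeps retrying while the apiserver at 172.31.30.169:6443 is still coming up (hence "connection refused"). A minimal client-go sketch of the same request, under the assumption of a bootstrap kubeconfig at the conventional path; key handling is simplified.

```go
// Sketch: the CSR the kubelet's certificate manager is retrying above.
package main

import (
	"context"
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"

	certsv1 "k8s.io/api/certificates/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	const nodeName = "ip-172-31-30-169"

	// PEM-encoded CSR with the node identity the kubelet uses:
	// CN=system:node:<name>, O=system:nodes. Errors elided for brevity.
	key, _ := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	der, _ := x509.CreateCertificateRequest(rand.Reader, &x509.CertificateRequest{
		Subject: pkix.Name{
			CommonName:   "system:node:" + nodeName,
			Organization: []string{"system:nodes"},
		},
	}, key)
	csrPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE REQUEST", Bytes: der})

	// Assumed bootstrap credentials; the real kubelet gets these from
	// its --bootstrap-kubeconfig flag.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/bootstrap-kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// This POST is the call that fails with "connection refused" in the log
	// while the apiserver static pod is still starting; the manager retries.
	_, err = client.CertificatesV1().CertificateSigningRequests().Create(context.TODO(),
		&certsv1.CertificateSigningRequest{
			ObjectMeta: metav1.ObjectMeta{GenerateName: "csr-"},
			Spec: certsv1.CertificateSigningRequestSpec{
				Request:    csrPEM,
				SignerName: certsv1.KubeAPIServerClientKubeletSignerName,
				Usages:     []certsv1.KeyUsage{certsv1.UsageClientAuth},
			},
		}, metav1.CreateOptions{})
	if err != nil {
		log.Printf("will retry: %v", err)
	}
}
```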
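[Editor's note] The repeated reflector.go warnings above are client-go informers doing their initial list before establishing a watch; every one fails with the same connection-refused error until the apiserver is up. A short sketch of the equivalent list-then-watch against the endpoint and field selector shown in the log (authentication omitted in this sketch):

```go
// Sketch: the List/Watch pair behind the reflector warnings above.
package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Credentials omitted; the endpoint matches the log.
	cfg := &rest.Config{Host: "https://172.31.30.169:6443"}
	client := kubernetes.NewForConfigOrDie(cfg)

	// GET /api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500
	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=ip-172-31-30-169",
		Limit:         500,
	})
	if err != nil {
		log.Fatalf("list failed (a reflector would back off and retry): %v", err)
	}

	// The watch resumes from the resourceVersion the list returned, so no
	// events are lost between the two calls.
	w, err := client.CoreV1().Nodes().Watch(context.TODO(), metav1.ListOptions{
		ResourceVersion: nodes.ResourceVersion,
	})
	if err != nil {
		log.Fatal(err)
	}
	defer w.Stop()
}
```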
Dec 13 02:21:55.782359 kubelet[2376]: I1213 02:21:55.782277 2376 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:21:55.784520 kubelet[2376]: I1213 02:21:55.784494 2376 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:21:55.784917 kubelet[2376]: I1213 02:21:55.784898 2376 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:21:55.787838 kubelet[2376]: E1213 02:21:55.787809 2376 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.169:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.169:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-169.18109b36b356e6fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-169,UID:ip-172-31-30-169,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-169,},FirstTimestamp:2024-12-13 02:21:55.776997116 +0000 UTC m=+0.693149762,LastTimestamp:2024-12-13 02:21:55.776997116 +0000 UTC m=+0.693149762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-169,}" Dec 13 02:21:55.792336 kubelet[2376]: I1213 02:21:55.792309 2376 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:21:55.794378 kubelet[2376]: E1213 02:21:55.794343 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-169?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="200ms" Dec 13 02:21:55.794721 kubelet[2376]: I1213 02:21:55.794696 2376 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:21:55.798778 kubelet[2376]: I1213 02:21:55.798752 2376 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:21:55.799022 kubelet[2376]: I1213 02:21:55.799008 2376 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:21:55.799556 kubelet[2376]: I1213 02:21:55.799528 2376 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:21:55.799556 kubelet[2376]: I1213 02:21:55.799558 2376 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:21:55.813255 kubelet[2376]: E1213 02:21:55.813220 2376 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:21:55.836457 kubelet[2376]: I1213 02:21:55.836423 2376 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 02:21:55.838065 kubelet[2376]: W1213 02:21:55.838013 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.838211 kubelet[2376]: E1213 02:21:55.838080 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.841193 kubelet[2376]: I1213 02:21:55.841171 2376 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 02:21:55.841627 kubelet[2376]: I1213 02:21:55.841599 2376 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:21:55.841809 kubelet[2376]: I1213 02:21:55.841791 2376 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:21:55.841885 kubelet[2376]: E1213 02:21:55.841874 2376 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:21:55.842843 kubelet[2376]: I1213 02:21:55.842824 2376 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:21:55.842843 kubelet[2376]: I1213 02:21:55.842844 2376 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:21:55.842975 kubelet[2376]: I1213 02:21:55.842862 2376 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:21:55.850570 kubelet[2376]: W1213 02:21:55.850519 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.850797 kubelet[2376]: E1213 02:21:55.850773 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:55.862533 kubelet[2376]: I1213 02:21:55.862491 2376 policy_none.go:49] "None policy: Start" Dec 13 02:21:55.863343 kubelet[2376]: I1213 02:21:55.863317 2376 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:21:55.863468 kubelet[2376]: I1213 02:21:55.863351 2376 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:21:55.884522 kubelet[2376]: I1213 02:21:55.884483 2376 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:21:55.884881 kubelet[2376]: I1213 02:21:55.884850 2376 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:21:55.888226 kubelet[2376]: E1213 02:21:55.888200 2376 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-30-169\" not found" Dec 13 02:21:55.895562 kubelet[2376]: I1213 02:21:55.895523 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:21:55.896064 kubelet[2376]: E1213 02:21:55.896042 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.169:6443/api/v1/nodes\": dial tcp 
172.31.30.169:6443: connect: connection refused" node="ip-172-31-30-169" Dec 13 02:21:55.945189 kubelet[2376]: I1213 02:21:55.944154 2376 topology_manager.go:215] "Topology Admit Handler" podUID="99f5b4f4e7df906f3689236bb2790cdf" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-169" Dec 13 02:21:55.955900 kubelet[2376]: I1213 02:21:55.955869 2376 topology_manager.go:215] "Topology Admit Handler" podUID="64d35d1427c6990de2c39d1ad2442346" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:55.961024 kubelet[2376]: I1213 02:21:55.960992 2376 topology_manager.go:215] "Topology Admit Handler" podUID="8c3902d1ff9ad7ad5173bb3ac933f25e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-169" Dec 13 02:21:56.004982 kubelet[2376]: E1213 02:21:56.004950 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-169?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="400ms" Dec 13 02:21:56.098569 kubelet[2376]: I1213 02:21:56.098530 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:21:56.098937 kubelet[2376]: E1213 02:21:56.098914 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.169:6443/api/v1/nodes\": dial tcp 172.31.30.169:6443: connect: connection refused" node="ip-172-31-30-169" Dec 13 02:21:56.105222 kubelet[2376]: I1213 02:21:56.105189 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:56.105461 kubelet[2376]: I1213 02:21:56.105244 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:56.105461 kubelet[2376]: I1213 02:21:56.105283 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3902d1ff9ad7ad5173bb3ac933f25e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-169\" (UID: \"8c3902d1ff9ad7ad5173bb3ac933f25e\") " pod="kube-system/kube-scheduler-ip-172-31-30-169" Dec 13 02:21:56.105461 kubelet[2376]: I1213 02:21:56.105311 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-ca-certs\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:21:56.105461 kubelet[2376]: I1213 02:21:56.105397 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " 
pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:21:56.105461 kubelet[2376]: I1213 02:21:56.105429 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:56.105709 kubelet[2376]: I1213 02:21:56.105457 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:21:56.105709 kubelet[2376]: I1213 02:21:56.105504 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:56.105709 kubelet[2376]: I1213 02:21:56.105536 2376 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:21:56.284610 env[1758]: time="2024-12-13T02:21:56.283931797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-169,Uid:99f5b4f4e7df906f3689236bb2790cdf,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:56.292104 env[1758]: time="2024-12-13T02:21:56.292048153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-169,Uid:64d35d1427c6990de2c39d1ad2442346,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:56.292653 env[1758]: time="2024-12-13T02:21:56.292616189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-169,Uid:8c3902d1ff9ad7ad5173bb3ac933f25e,Namespace:kube-system,Attempt:0,}" Dec 13 02:21:56.405895 kubelet[2376]: E1213 02:21:56.405856 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-169?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="800ms" Dec 13 02:21:56.501363 kubelet[2376]: I1213 02:21:56.501333 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:21:56.502012 kubelet[2376]: E1213 02:21:56.501981 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.169:6443/api/v1/nodes\": dial tcp 172.31.30.169:6443: connect: connection refused" node="ip-172-31-30-169" Dec 13 02:21:56.675458 kubelet[2376]: W1213 02:21:56.675410 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.675458 kubelet[2376]: E1213 
02:21:56.675455 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.816952 kubelet[2376]: W1213 02:21:56.814844 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.816952 kubelet[2376]: E1213 02:21:56.814900 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.825629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3733124325.mount: Deactivated successfully. Dec 13 02:21:56.853504 env[1758]: time="2024-12-13T02:21:56.853460010Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.856056 env[1758]: time="2024-12-13T02:21:56.856012126Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.863293 env[1758]: time="2024-12-13T02:21:56.862824092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.867843 env[1758]: time="2024-12-13T02:21:56.867792440Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.872812 env[1758]: time="2024-12-13T02:21:56.872760864Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.878031 env[1758]: time="2024-12-13T02:21:56.877980520Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.882788 env[1758]: time="2024-12-13T02:21:56.881035111Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.885200 env[1758]: time="2024-12-13T02:21:56.885156750Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.895170 env[1758]: time="2024-12-13T02:21:56.895122945Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.900598 env[1758]: time="2024-12-13T02:21:56.900536636Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.902095 env[1758]: time="2024-12-13T02:21:56.902049598Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.902933 env[1758]: time="2024-12-13T02:21:56.902898690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:21:56.915052 kubelet[2376]: W1213 02:21:56.914938 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.915052 kubelet[2376]: E1213 02:21:56.915027 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:56.968567 env[1758]: time="2024-12-13T02:21:56.967632793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:56.968847 env[1758]: time="2024-12-13T02:21:56.967695672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:56.968996 env[1758]: time="2024-12-13T02:21:56.968954229Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:56.970043 env[1758]: time="2024-12-13T02:21:56.969903605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f27cb3fd124d51a2ac44387a3f50d9a8973920ea0f672e8bbb5ad151c9d3495 pid=2423 runtime=io.containerd.runc.v2 Dec 13 02:21:56.971019 env[1758]: time="2024-12-13T02:21:56.970675092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:56.971019 env[1758]: time="2024-12-13T02:21:56.970777451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:56.971019 env[1758]: time="2024-12-13T02:21:56.970798481Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:56.971328 env[1758]: time="2024-12-13T02:21:56.971087657Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/66ed44ab10d4d6d7bf90fbe0619e62e72f5ec8a4265041b9b0aaf08cfa1e911a pid=2428 runtime=io.containerd.runc.v2 Dec 13 02:21:56.989249 env[1758]: time="2024-12-13T02:21:56.989154678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:21:56.989249 env[1758]: time="2024-12-13T02:21:56.989203795Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:21:56.989501 env[1758]: time="2024-12-13T02:21:56.989238466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:21:56.989501 env[1758]: time="2024-12-13T02:21:56.989465462Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1c2cc5c6a75ce6243a6c8a6d870950b521bb08743d8db14a5b47ee0ce7db6b1b pid=2450 runtime=io.containerd.runc.v2 Dec 13 02:21:57.171718 env[1758]: time="2024-12-13T02:21:57.171668873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-30-169,Uid:64d35d1427c6990de2c39d1ad2442346,Namespace:kube-system,Attempt:0,} returns sandbox id \"1c2cc5c6a75ce6243a6c8a6d870950b521bb08743d8db14a5b47ee0ce7db6b1b\"" Dec 13 02:21:57.172082 env[1758]: time="2024-12-13T02:21:57.171908840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-30-169,Uid:99f5b4f4e7df906f3689236bb2790cdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f27cb3fd124d51a2ac44387a3f50d9a8973920ea0f672e8bbb5ad151c9d3495\"" Dec 13 02:21:57.178246 env[1758]: time="2024-12-13T02:21:57.178197419Z" level=info msg="CreateContainer within sandbox \"1c2cc5c6a75ce6243a6c8a6d870950b521bb08743d8db14a5b47ee0ce7db6b1b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:21:57.184796 env[1758]: time="2024-12-13T02:21:57.183163822Z" level=info msg="CreateContainer within sandbox \"2f27cb3fd124d51a2ac44387a3f50d9a8973920ea0f672e8bbb5ad151c9d3495\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:21:57.208699 kubelet[2376]: E1213 02:21:57.208652 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-169?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="1.6s" Dec 13 02:21:57.212385 env[1758]: time="2024-12-13T02:21:57.212349821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-30-169,Uid:8c3902d1ff9ad7ad5173bb3ac933f25e,Namespace:kube-system,Attempt:0,} returns sandbox id \"66ed44ab10d4d6d7bf90fbe0619e62e72f5ec8a4265041b9b0aaf08cfa1e911a\"" Dec 13 02:21:57.217922 env[1758]: time="2024-12-13T02:21:57.217884470Z" level=info msg="CreateContainer within sandbox \"66ed44ab10d4d6d7bf90fbe0619e62e72f5ec8a4265041b9b0aaf08cfa1e911a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:21:57.248596 env[1758]: time="2024-12-13T02:21:57.247581036Z" level=info msg="CreateContainer within sandbox \"2f27cb3fd124d51a2ac44387a3f50d9a8973920ea0f672e8bbb5ad151c9d3495\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"aad8ddeb3cae955f713f371b00f50512b7e4e9e66ef07f482dc94c0a149f7037\"" Dec 13 02:21:57.249035 env[1758]: time="2024-12-13T02:21:57.248998133Z" level=info msg="StartContainer for \"aad8ddeb3cae955f713f371b00f50512b7e4e9e66ef07f482dc94c0a149f7037\"" Dec 13 02:21:57.258154 kubelet[2376]: W1213 02:21:57.258074 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:57.258454 kubelet[2376]: E1213 02:21:57.258165 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:57.261213 env[1758]: time="2024-12-13T02:21:57.261178084Z" level=info msg="CreateContainer within sandbox \"1c2cc5c6a75ce6243a6c8a6d870950b521bb08743d8db14a5b47ee0ce7db6b1b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3a92e7aa3397f3f92b9b6cbc5a60ad6ea0495eb169022171c2ef870f1730900\"" Dec 13 02:21:57.262510 env[1758]: time="2024-12-13T02:21:57.262371777Z" level=info msg="StartContainer for \"c3a92e7aa3397f3f92b9b6cbc5a60ad6ea0495eb169022171c2ef870f1730900\"" Dec 13 02:21:57.269826 env[1758]: time="2024-12-13T02:21:57.269607529Z" level=info msg="CreateContainer within sandbox \"66ed44ab10d4d6d7bf90fbe0619e62e72f5ec8a4265041b9b0aaf08cfa1e911a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fa4779d49b6e2e3c74747e39fd9e4ee80ca2affd5338755750c6f7ea024b8dd1\"" Dec 13 02:21:57.270501 env[1758]: time="2024-12-13T02:21:57.270466402Z" level=info msg="StartContainer for \"fa4779d49b6e2e3c74747e39fd9e4ee80ca2affd5338755750c6f7ea024b8dd1\"" Dec 13 02:21:57.308739 kubelet[2376]: I1213 02:21:57.308184 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:21:57.308950 kubelet[2376]: E1213 02:21:57.308818 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.169:6443/api/v1/nodes\": dial tcp 172.31.30.169:6443: connect: connection refused" node="ip-172-31-30-169" Dec 13 02:21:57.519437 env[1758]: time="2024-12-13T02:21:57.517710021Z" level=info msg="StartContainer for \"aad8ddeb3cae955f713f371b00f50512b7e4e9e66ef07f482dc94c0a149f7037\" returns successfully" Dec 13 02:21:57.540874 env[1758]: time="2024-12-13T02:21:57.539444350Z" level=info msg="StartContainer for \"fa4779d49b6e2e3c74747e39fd9e4ee80ca2affd5338755750c6f7ea024b8dd1\" returns successfully" Dec 13 02:21:57.540874 env[1758]: time="2024-12-13T02:21:57.540592301Z" level=info msg="StartContainer for \"c3a92e7aa3397f3f92b9b6cbc5a60ad6ea0495eb169022171c2ef870f1730900\" returns successfully" Dec 13 02:21:57.755276 kubelet[2376]: E1213 02:21:57.755242 2376 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.30.169:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:58.466216 kubelet[2376]: E1213 02:21:58.466175 2376 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.30.169:6443/api/v1/namespaces/default/events\": dial tcp 172.31.30.169:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-30-169.18109b36b356e6fc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-30-169,UID:ip-172-31-30-169,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-30-169,},FirstTimestamp:2024-12-13 
02:21:55.776997116 +0000 UTC m=+0.693149762,LastTimestamp:2024-12-13 02:21:55.776997116 +0000 UTC m=+0.693149762,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-30-169,}" Dec 13 02:21:58.810707 kubelet[2376]: E1213 02:21:58.810621 2376 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.30.169:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-30-169?timeout=10s\": dial tcp 172.31.30.169:6443: connect: connection refused" interval="3.2s" Dec 13 02:21:58.900948 kubelet[2376]: W1213 02:21:58.900830 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:58.901576 kubelet[2376]: E1213 02:21:58.900959 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.30.169:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:58.911062 kubelet[2376]: I1213 02:21:58.911032 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:21:58.911608 kubelet[2376]: E1213 02:21:58.911583 2376 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.30.169:6443/api/v1/nodes\": dial tcp 172.31.30.169:6443: connect: connection refused" node="ip-172-31-30-169" Dec 13 02:21:59.539198 kubelet[2376]: W1213 02:21:59.539080 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:59.539198 kubelet[2376]: E1213 02:21:59.539156 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.30.169:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:59.679881 kubelet[2376]: W1213 02:21:59.679759 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:59.679881 kubelet[2376]: E1213 02:21:59.679853 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.30.169:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-30-169&limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:59.778960 kubelet[2376]: W1213 02:21:59.778778 2376 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:21:59.778960 kubelet[2376]: E1213 02:21:59.778890 2376 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get 
"https://172.31.30.169:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.30.169:6443: connect: connection refused Dec 13 02:22:02.126825 kubelet[2376]: I1213 02:22:02.126791 2376 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:22:02.817328 kubelet[2376]: E1213 02:22:02.817290 2376 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-30-169\" not found" node="ip-172-31-30-169" Dec 13 02:22:02.905924 kubelet[2376]: I1213 02:22:02.905885 2376 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-169" Dec 13 02:22:02.935738 kubelet[2376]: E1213 02:22:02.935702 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.037131 kubelet[2376]: E1213 02:22:03.037099 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.137735 kubelet[2376]: E1213 02:22:03.137621 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.238660 kubelet[2376]: E1213 02:22:03.238614 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.339250 kubelet[2376]: E1213 02:22:03.339203 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.439998 kubelet[2376]: E1213 02:22:03.439856 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.540653 kubelet[2376]: E1213 02:22:03.540477 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.641250 kubelet[2376]: E1213 02:22:03.641203 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.742063 kubelet[2376]: E1213 02:22:03.741955 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:03.844276 kubelet[2376]: E1213 02:22:03.844237 2376 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ip-172-31-30-169\" not found" Dec 13 02:22:04.770298 kubelet[2376]: I1213 02:22:04.770253 2376 apiserver.go:52] "Watching apiserver" Dec 13 02:22:04.796024 update_engine[1750]: I1213 02:22:04.795975 1750 update_attempter.cc:509] Updating boot flags... Dec 13 02:22:04.799854 kubelet[2376]: I1213 02:22:04.799747 2376 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:22:06.376762 systemd[1]: Reloading. Dec 13 02:22:06.531448 /usr/lib/systemd/system-generators/torcx-generator[2766]: time="2024-12-13T02:22:06Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:22:06.535612 /usr/lib/systemd/system-generators/torcx-generator[2766]: time="2024-12-13T02:22:06Z" level=info msg="torcx already run" Dec 13 02:22:06.666275 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. Dec 13 02:22:06.666297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:22:06.691826 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:22:06.840046 systemd[1]: Stopping kubelet.service... Dec 13 02:22:06.840769 kubelet[2376]: I1213 02:22:06.840691 2376 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:22:06.854904 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:22:06.855314 systemd[1]: Stopped kubelet.service. Dec 13 02:22:06.859943 systemd[1]: Starting kubelet.service... Dec 13 02:22:08.109929 systemd[1]: Started kubelet.service. Dec 13 02:22:08.309352 sudo[2844]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:22:08.309755 sudo[2844]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:22:08.323477 kubelet[2833]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:08.323477 kubelet[2833]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:22:08.323477 kubelet[2833]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:22:08.324203 kubelet[2833]: I1213 02:22:08.323567 2833 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:22:08.332193 kubelet[2833]: I1213 02:22:08.332157 2833 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:22:08.332193 kubelet[2833]: I1213 02:22:08.332186 2833 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:22:08.332762 kubelet[2833]: I1213 02:22:08.332741 2833 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:22:08.340571 kubelet[2833]: I1213 02:22:08.337502 2833 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:22:08.349453 kubelet[2833]: I1213 02:22:08.349388 2833 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:22:08.359471 kubelet[2833]: I1213 02:22:08.359135 2833 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.359964 2833 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.360310 2833 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.360342 2833 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.360355 2833 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.360407 2833 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:08.360568 kubelet[2833]: I1213 02:22:08.360514 2833 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:22:08.361061 kubelet[2833]: I1213 02:22:08.360531 2833 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:22:08.361061 kubelet[2833]: I1213 02:22:08.360584 2833 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:22:08.361061 kubelet[2833]: I1213 02:22:08.360604 2833 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:22:08.378181 kubelet[2833]: I1213 02:22:08.378086 2833 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:22:08.378649 kubelet[2833]: I1213 02:22:08.378626 2833 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:22:08.379961 kubelet[2833]: I1213 02:22:08.379943 2833 server.go:1256] "Started kubelet" Dec 13 02:22:08.390617 kubelet[2833]: I1213 02:22:08.390571 2833 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:22:08.400704 kubelet[2833]: I1213 02:22:08.400671 2833 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:22:08.404213 kubelet[2833]: I1213 02:22:08.404171 2833 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:22:08.404886 kubelet[2833]: I1213 02:22:08.404861 2833 server.go:233] "Starting to 
serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:22:08.406463 kubelet[2833]: I1213 02:22:08.406434 2833 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:22:08.411184 kubelet[2833]: I1213 02:22:08.410698 2833 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:22:08.411184 kubelet[2833]: I1213 02:22:08.410941 2833 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:22:08.432160 kubelet[2833]: I1213 02:22:08.403081 2833 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:22:08.456943 kubelet[2833]: I1213 02:22:08.456910 2833 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:22:08.457300 kubelet[2833]: I1213 02:22:08.457234 2833 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:22:08.480551 kubelet[2833]: E1213 02:22:08.480513 2833 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:22:08.484677 kubelet[2833]: I1213 02:22:08.484653 2833 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:22:08.511462 kubelet[2833]: E1213 02:22:08.511429 2833 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Dec 13 02:22:08.518981 kubelet[2833]: I1213 02:22:08.518717 2833 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-30-169" Dec 13 02:22:08.529590 kubelet[2833]: I1213 02:22:08.528993 2833 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:22:08.552994 kubelet[2833]: I1213 02:22:08.531841 2833 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-30-169" Dec 13 02:22:08.552994 kubelet[2833]: I1213 02:22:08.531917 2833 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-30-169" Dec 13 02:22:08.552994 kubelet[2833]: I1213 02:22:08.532299 2833 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:22:08.552994 kubelet[2833]: I1213 02:22:08.532323 2833 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:22:08.552994 kubelet[2833]: I1213 02:22:08.532343 2833 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:22:08.552994 kubelet[2833]: E1213 02:22:08.532395 2833 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:22:08.632697 kubelet[2833]: E1213 02:22:08.632612 2833 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 02:22:08.762487 kubelet[2833]: I1213 02:22:08.762460 2833 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:22:08.764215 kubelet[2833]: I1213 02:22:08.762897 2833 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:22:08.764215 kubelet[2833]: I1213 02:22:08.762925 2833 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:22:08.764215 kubelet[2833]: I1213 02:22:08.763250 2833 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:22:08.764215 kubelet[2833]: I1213 02:22:08.763336 2833 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:22:08.764215 kubelet[2833]: I1213 02:22:08.763608 2833 policy_none.go:49] "None policy: Start" Dec 13 02:22:08.765645 kubelet[2833]: I1213 02:22:08.765628 2833 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:22:08.766829 kubelet[2833]: I1213 02:22:08.765786 2833 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:22:08.766829 kubelet[2833]: I1213 02:22:08.766035 2833 state_mem.go:75] "Updated machine memory state" Dec 13 02:22:08.768325 kubelet[2833]: I1213 02:22:08.768308 2833 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:22:08.779157 kubelet[2833]: I1213 02:22:08.779084 2833 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:22:08.833421 kubelet[2833]: I1213 02:22:08.833384 2833 topology_manager.go:215] "Topology Admit Handler" podUID="99f5b4f4e7df906f3689236bb2790cdf" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-30-169" Dec 13 02:22:08.833920 kubelet[2833]: I1213 02:22:08.833887 2833 topology_manager.go:215] "Topology Admit Handler" podUID="64d35d1427c6990de2c39d1ad2442346" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.834459 kubelet[2833]: I1213 02:22:08.834438 2833 topology_manager.go:215] "Topology Admit Handler" podUID="8c3902d1ff9ad7ad5173bb3ac933f25e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-30-169" Dec 13 02:22:08.918513 kubelet[2833]: I1213 02:22:08.918406 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-ca-certs\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.918513 kubelet[2833]: I1213 02:22:08.918462 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " 
pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.918513 kubelet[2833]: I1213 02:22:08.918491 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-k8s-certs\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.918774 kubelet[2833]: I1213 02:22:08.918523 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.918774 kubelet[2833]: I1213 02:22:08.918562 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-ca-certs\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:22:08.918774 kubelet[2833]: I1213 02:22:08.918590 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:22:08.918774 kubelet[2833]: I1213 02:22:08.918614 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64d35d1427c6990de2c39d1ad2442346-kubeconfig\") pod \"kube-controller-manager-ip-172-31-30-169\" (UID: \"64d35d1427c6990de2c39d1ad2442346\") " pod="kube-system/kube-controller-manager-ip-172-31-30-169" Dec 13 02:22:08.918774 kubelet[2833]: I1213 02:22:08.918647 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c3902d1ff9ad7ad5173bb3ac933f25e-kubeconfig\") pod \"kube-scheduler-ip-172-31-30-169\" (UID: \"8c3902d1ff9ad7ad5173bb3ac933f25e\") " pod="kube-system/kube-scheduler-ip-172-31-30-169" Dec 13 02:22:08.919002 kubelet[2833]: I1213 02:22:08.918674 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/99f5b4f4e7df906f3689236bb2790cdf-k8s-certs\") pod \"kube-apiserver-ip-172-31-30-169\" (UID: \"99f5b4f4e7df906f3689236bb2790cdf\") " pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:22:09.381662 kubelet[2833]: I1213 02:22:09.381621 2833 apiserver.go:52] "Watching apiserver" Dec 13 02:22:09.411362 kubelet[2833]: I1213 02:22:09.411321 2833 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:22:09.454601 kubelet[2833]: I1213 02:22:09.454564 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-30-169" podStartSLOduration=1.454500017 podStartE2EDuration="1.454500017s" podCreationTimestamp="2024-12-13 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:09.45422036 +0000 UTC m=+1.284194887" watchObservedRunningTime="2024-12-13 02:22:09.454500017 +0000 UTC m=+1.284474543" Dec 13 02:22:09.467815 kubelet[2833]: I1213 02:22:09.467779 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-30-169" podStartSLOduration=1.467727552 podStartE2EDuration="1.467727552s" podCreationTimestamp="2024-12-13 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:09.467255185 +0000 UTC m=+1.297229708" watchObservedRunningTime="2024-12-13 02:22:09.467727552 +0000 UTC m=+1.297702081" Dec 13 02:22:09.498207 kubelet[2833]: I1213 02:22:09.498178 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-30-169" podStartSLOduration=1.498085524 podStartE2EDuration="1.498085524s" podCreationTimestamp="2024-12-13 02:22:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:09.483614997 +0000 UTC m=+1.313589530" watchObservedRunningTime="2024-12-13 02:22:09.498085524 +0000 UTC m=+1.328060053" Dec 13 02:22:09.672145 kubelet[2833]: E1213 02:22:09.671962 2833 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-30-169\" already exists" pod="kube-system/kube-apiserver-ip-172-31-30-169" Dec 13 02:22:09.689962 sudo[2844]: pam_unix(sudo:session): session closed for user root Dec 13 02:22:12.212032 sudo[2019]: pam_unix(sudo:session): session closed for user root Dec 13 02:22:12.235229 sshd[2015]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:12.238452 systemd[1]: sshd@4-172.31.30.169:22-139.178.68.195:50236.service: Deactivated successfully. Dec 13 02:22:12.239863 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:22:12.241226 systemd-logind[1749]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:22:12.243154 systemd-logind[1749]: Removed session 5. Dec 13 02:22:18.297025 amazon-ssm-agent[1734]: 2024-12-13 02:22:18 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 02:22:20.474194 kubelet[2833]: I1213 02:22:20.474161 2833 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:22:20.475449 env[1758]: time="2024-12-13T02:22:20.475403690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
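[Editor's note] The "Updating runtime config through cri with podcidr" message above is the kubelet pushing the node's pod CIDR (192.168.0.0/24) down to the container runtime over CRI, so the runtime's CNI integration can assign pod addresses; the "No cni config template" line is containerd acknowledging it. A minimal sketch of that CRI call, assuming containerd's CRI service on its default socket:

```go
// Sketch: the UpdateRuntimeConfig CRI call behind the pod CIDR update above.
package main

import (
	"context"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Assumed socket path; containerd serves its CRI API on the main socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	_, err = rt.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			// The CIDR logged above; it arrives on the node via Node.Spec.PodCIDR.
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
}
```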
Dec 13 02:22:20.477899 kubelet[2833]: I1213 02:22:20.477867 2833 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:22:20.667002 kubelet[2833]: I1213 02:22:20.666959 2833 topology_manager.go:215] "Topology Admit Handler" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" podNamespace="kube-system" podName="cilium-m6jfj" Dec 13 02:22:20.675751 kubelet[2833]: I1213 02:22:20.675716 2833 topology_manager.go:215] "Topology Admit Handler" podUID="4817d4b1-c879-4cd9-8a9f-d5961ff34963" podNamespace="kube-system" podName="kube-proxy-9zcdm" Dec 13 02:22:20.688321 kubelet[2833]: W1213 02:22:20.688283 2833 reflector.go:539] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-169" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-169' and this object Dec 13 02:22:20.688590 kubelet[2833]: E1213 02:22:20.688570 2833 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:ip-172-31-30-169" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-30-169' and this object Dec 13 02:22:20.712065 kubelet[2833]: I1213 02:22:20.712016 2833 topology_manager.go:215] "Topology Admit Handler" podUID="e3010c74-ddb2-4a3e-b491-43e90efb9c1d" podNamespace="kube-system" podName="cilium-operator-5cc964979-lxtl9" Dec 13 02:22:20.719871 kubelet[2833]: I1213 02:22:20.719833 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-hostproc\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.719897 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cni-path\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.719928 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4817d4b1-c879-4cd9-8a9f-d5961ff34963-kube-proxy\") pod \"kube-proxy-9zcdm\" (UID: \"4817d4b1-c879-4cd9-8a9f-d5961ff34963\") " pod="kube-system/kube-proxy-9zcdm" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.719953 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4817d4b1-c879-4cd9-8a9f-d5961ff34963-lib-modules\") pod \"kube-proxy-9zcdm\" (UID: \"4817d4b1-c879-4cd9-8a9f-d5961ff34963\") " pod="kube-system/kube-proxy-9zcdm" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.719982 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-run\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.720008 2833 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-xtables-lock\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720056 kubelet[2833]: I1213 02:22:20.720035 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cj22h\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-kube-api-access-cj22h\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720063 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-cgroup\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720093 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-lib-modules\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720124 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-net\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720154 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-kernel\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720184 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-etc-cni-netd\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720331 kubelet[2833]: I1213 02:22:20.720216 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-hubble-tls\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720601 kubelet[2833]: I1213 02:22:20.720251 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4817d4b1-c879-4cd9-8a9f-d5961ff34963-xtables-lock\") pod \"kube-proxy-9zcdm\" (UID: \"4817d4b1-c879-4cd9-8a9f-d5961ff34963\") " pod="kube-system/kube-proxy-9zcdm" Dec 13 02:22:20.720601 kubelet[2833]: I1213 02:22:20.720314 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-bpf-maps\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720601 kubelet[2833]: I1213 02:22:20.720345 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8pbxs\" (UniqueName: \"kubernetes.io/projected/4817d4b1-c879-4cd9-8a9f-d5961ff34963-kube-api-access-8pbxs\") pod \"kube-proxy-9zcdm\" (UID: \"4817d4b1-c879-4cd9-8a9f-d5961ff34963\") " pod="kube-system/kube-proxy-9zcdm" Dec 13 02:22:20.720601 kubelet[2833]: I1213 02:22:20.720380 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.720601 kubelet[2833]: I1213 02:22:20.720414 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-config-path\") pod \"cilium-m6jfj\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") " pod="kube-system/cilium-m6jfj" Dec 13 02:22:20.826331 kubelet[2833]: I1213 02:22:20.826294 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-cilium-config-path\") pod \"cilium-operator-5cc964979-lxtl9\" (UID: \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\") " pod="kube-system/cilium-operator-5cc964979-lxtl9" Dec 13 02:22:20.826557 kubelet[2833]: I1213 02:22:20.826524 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gl8zk\" (UniqueName: \"kubernetes.io/projected/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-kube-api-access-gl8zk\") pod \"cilium-operator-5cc964979-lxtl9\" (UID: \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\") " pod="kube-system/cilium-operator-5cc964979-lxtl9" Dec 13 02:22:20.980842 env[1758]: time="2024-12-13T02:22:20.980798814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zcdm,Uid:4817d4b1-c879-4cd9-8a9f-d5961ff34963,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:21.016067 env[1758]: time="2024-12-13T02:22:21.015648326Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-lxtl9,Uid:e3010c74-ddb2-4a3e-b491-43e90efb9c1d,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:21.019501 env[1758]: time="2024-12-13T02:22:21.019414603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:21.019501 env[1758]: time="2024-12-13T02:22:21.019465118Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:21.019501 env[1758]: time="2024-12-13T02:22:21.019480891Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:21.020224 env[1758]: time="2024-12-13T02:22:21.020174977Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f4bb26300dd9e093ddde8fd4e819125471197b8c25b00db18d5fffb08d3984ae pid=2915 runtime=io.containerd.runc.v2 Dec 13 02:22:21.095352 env[1758]: time="2024-12-13T02:22:21.094264829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9zcdm,Uid:4817d4b1-c879-4cd9-8a9f-d5961ff34963,Namespace:kube-system,Attempt:0,} returns sandbox id \"f4bb26300dd9e093ddde8fd4e819125471197b8c25b00db18d5fffb08d3984ae\"" Dec 13 02:22:21.095352 env[1758]: time="2024-12-13T02:22:21.094931953Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:21.095352 env[1758]: time="2024-12-13T02:22:21.095012577Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:21.095352 env[1758]: time="2024-12-13T02:22:21.095040454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:21.095352 env[1758]: time="2024-12-13T02:22:21.095225386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b pid=2949 runtime=io.containerd.runc.v2 Dec 13 02:22:21.100514 env[1758]: time="2024-12-13T02:22:21.100468056Z" level=info msg="CreateContainer within sandbox \"f4bb26300dd9e093ddde8fd4e819125471197b8c25b00db18d5fffb08d3984ae\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:22:21.158525 env[1758]: time="2024-12-13T02:22:21.156944063Z" level=info msg="CreateContainer within sandbox \"f4bb26300dd9e093ddde8fd4e819125471197b8c25b00db18d5fffb08d3984ae\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"422ac8d073c644a964a5058650639fec32981974ec09cbfcf18358317b65d378\"" Dec 13 02:22:21.158525 env[1758]: time="2024-12-13T02:22:21.157828430Z" level=info msg="StartContainer for \"422ac8d073c644a964a5058650639fec32981974ec09cbfcf18358317b65d378\"" Dec 13 02:22:21.178993 env[1758]: time="2024-12-13T02:22:21.178916138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-lxtl9,Uid:e3010c74-ddb2-4a3e-b491-43e90efb9c1d,Namespace:kube-system,Attempt:0,} returns sandbox id \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\"" Dec 13 02:22:21.182116 env[1758]: time="2024-12-13T02:22:21.182075287Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:22:21.235460 env[1758]: time="2024-12-13T02:22:21.235404688Z" level=info msg="StartContainer for \"422ac8d073c644a964a5058650639fec32981974ec09cbfcf18358317b65d378\" returns successfully" Dec 13 02:22:21.836453 kubelet[2833]: E1213 02:22:21.836398 2833 secret.go:194] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Dec 13 02:22:21.837072 kubelet[2833]: E1213 02:22:21.836532 2833 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets podName:483e5960-5ffb-494a-b026-7de47696a7c0 nodeName:}" failed. 
No retries permitted until 2024-12-13 02:22:22.336501609 +0000 UTC m=+14.166476130 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets") pod "cilium-m6jfj" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0") : failed to sync secret cache: timed out waiting for the condition Dec 13 02:22:22.477196 env[1758]: time="2024-12-13T02:22:22.477140515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6jfj,Uid:483e5960-5ffb-494a-b026-7de47696a7c0,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:22.535091 env[1758]: time="2024-12-13T02:22:22.533660262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:22:22.535835 env[1758]: time="2024-12-13T02:22:22.533751427Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:22:22.535835 env[1758]: time="2024-12-13T02:22:22.535332132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:22:22.536651 env[1758]: time="2024-12-13T02:22:22.536530367Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1 pid=3035 runtime=io.containerd.runc.v2 Dec 13 02:22:22.634700 env[1758]: time="2024-12-13T02:22:22.634640992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-m6jfj,Uid:483e5960-5ffb-494a-b026-7de47696a7c0,Namespace:kube-system,Attempt:0,} returns sandbox id \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\"" Dec 13 02:22:22.875158 systemd[1]: run-containerd-runc-k8s.io-3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1-runc.K5yhfr.mount: Deactivated successfully. 
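The nestedpendingoperations entry above schedules the failed clustermesh-secrets mount for a retry 500ms later (durationBeforeRetry 500ms), and kubelet grows that delay on repeated failures up to a cap. A hedged sketch of that retry-with-backoff pattern; the mount function, attempt threshold, and cap here are illustrative, not kubelet's actual code beyond the 500ms seen in the log:

package main

import (
	"errors"
	"fmt"
	"time"
)

// mountSecretVolume stands in for the real MountVolume.SetUp call; it fails
// until the secret cache has synced (simulated by the attempt counter).
func mountSecretVolume(attempt int) error {
	if attempt < 3 {
		return errors.New("failed to sync secret cache: timed out waiting for the condition")
	}
	return nil
}

func main() {
	delay := 500 * time.Millisecond // durationBeforeRetry from the log
	const maxDelay = 2 * time.Minute

	for attempt := 1; ; attempt++ {
		if err := mountSecretVolume(attempt); err != nil {
			fmt.Printf("attempt %d failed: %v; no retries permitted for %v\n", attempt, err, delay)
			time.Sleep(delay)
			if delay *= 2; delay > maxDelay { // double the delay, capped
				delay = maxDelay
			}
			continue
		}
		fmt.Printf("attempt %d: volume mounted\n", attempt)
		return
	}
}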
Dec 13 02:22:24.035930 env[1758]: time="2024-12-13T02:22:24.035887403Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.043917 env[1758]: time="2024-12-13T02:22:24.043015325Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.048759 env[1758]: time="2024-12-13T02:22:24.048635198Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:24.049605 env[1758]: time="2024-12-13T02:22:24.049562182Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:22:24.053307 env[1758]: time="2024-12-13T02:22:24.052433336Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:22:24.054889 env[1758]: time="2024-12-13T02:22:24.054746951Z" level=info msg="CreateContainer within sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:22:24.090293 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3731360886.mount: Deactivated successfully. 
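The PullImage entries above use a reference that pins both a tag and a digest (quay.io/cilium/operator-generic:v1.12.5@sha256:b296…), and the pull resolves to a local image ID. A small sketch splitting such a reference into repository, tag, and digest; this is plain string surgery for illustration, where real code would use a reference-parsing library:

package main

import (
	"fmt"
	"strings"
)

// splitRef splits a "repo:tag@sha256:digest" image reference like the one
// pulled above into its parts.
func splitRef(ref string) (repo, tag, digest string) {
	if i := strings.Index(ref, "@"); i >= 0 {
		digest = ref[i+1:]
		ref = ref[:i]
	}
	// The tag is whatever follows the last colon after the final slash, so a
	// registry port (e.g. host:5000/img) is not mistaken for a tag.
	if i := strings.LastIndex(ref, ":"); i > strings.LastIndex(ref, "/") {
		tag = ref[i+1:]
		ref = ref[:i]
	}
	return ref, tag, digest
}

func main() {
	repo, tag, digest := splitRef("quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e")
	fmt.Println(repo)   // quay.io/cilium/operator-generic
	fmt.Println(tag)    // v1.12.5
	fmt.Println(digest) // sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e
}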
Dec 13 02:22:24.108597 env[1758]: time="2024-12-13T02:22:24.108526601Z" level=info msg="CreateContainer within sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\"" Dec 13 02:22:24.109865 env[1758]: time="2024-12-13T02:22:24.109792970Z" level=info msg="StartContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\"" Dec 13 02:22:24.192437 env[1758]: time="2024-12-13T02:22:24.192375371Z" level=info msg="StartContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" returns successfully" Dec 13 02:22:24.764769 kubelet[2833]: I1213 02:22:24.764739 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9zcdm" podStartSLOduration=4.764690816 podStartE2EDuration="4.764690816s" podCreationTimestamp="2024-12-13 02:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:22:21.725110894 +0000 UTC m=+13.555085423" watchObservedRunningTime="2024-12-13 02:22:24.764690816 +0000 UTC m=+16.594665344" Dec 13 02:22:28.588100 kubelet[2833]: I1213 02:22:28.588055 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-lxtl9" podStartSLOduration=5.718603034 podStartE2EDuration="8.587986567s" podCreationTimestamp="2024-12-13 02:22:20 +0000 UTC" firstStartedPulling="2024-12-13 02:22:21.181432111 +0000 UTC m=+13.011406634" lastFinishedPulling="2024-12-13 02:22:24.050815647 +0000 UTC m=+15.880790167" observedRunningTime="2024-12-13 02:22:24.766298204 +0000 UTC m=+16.596272710" watchObservedRunningTime="2024-12-13 02:22:28.587986567 +0000 UTC m=+20.417961096" Dec 13 02:22:31.852029 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3438565004.mount: Deactivated successfully. 
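The cilium-operator entry above is the first where podStartSLOduration and podStartE2EDuration diverge: the SLO figure excludes the image-pull window bounded by firstStartedPulling and lastFinishedPulling. A quick check of that relation from the logged timestamps (same layout assumption as the earlier sketch; the tiny nanosecond residue is rounding in the logged values):

package main

import (
	"fmt"
	"time"
)

func mustParse(layout, s string) time.Time {
	t, err := time.Parse(layout, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	firstPull := mustParse(layout, "2024-12-13 02:22:21.181432111 +0000 UTC")
	lastPull := mustParse(layout, "2024-12-13 02:22:24.050815647 +0000 UTC")

	e2e := 8587986567 * time.Nanosecond // podStartE2EDuration=8.587986567s
	pull := lastPull.Sub(firstPull)

	// SLO duration is the end-to-end duration minus time spent pulling images.
	fmt.Printf("pull=%v slo=%v\n", pull, e2e-pull) // slo ≈ 5.718603s, matching the log
}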
Dec 13 02:22:35.832970 env[1758]: time="2024-12-13T02:22:35.832919996Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:35.976081 env[1758]: time="2024-12-13T02:22:35.976031841Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:35.979161 env[1758]: time="2024-12-13T02:22:35.979116342Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:22:35.980231 env[1758]: time="2024-12-13T02:22:35.980190787Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:22:35.985167 env[1758]: time="2024-12-13T02:22:35.985120580Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:22:36.006372 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3294225306.mount: Deactivated successfully. Dec 13 02:22:36.013697 env[1758]: time="2024-12-13T02:22:36.013000123Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\"" Dec 13 02:22:36.014608 env[1758]: time="2024-12-13T02:22:36.014560570Z" level=info msg="StartContainer for \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\"" Dec 13 02:22:36.107605 env[1758]: time="2024-12-13T02:22:36.104763077Z" level=info msg="StartContainer for \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\" returns successfully" Dec 13 02:22:36.255129 env[1758]: time="2024-12-13T02:22:36.255071140Z" level=info msg="shim disconnected" id=8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa Dec 13 02:22:36.255129 env[1758]: time="2024-12-13T02:22:36.255129732Z" level=warning msg="cleaning up after shim disconnected" id=8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa namespace=k8s.io Dec 13 02:22:36.255129 env[1758]: time="2024-12-13T02:22:36.255142690Z" level=info msg="cleaning up dead shim" Dec 13 02:22:36.271812 env[1758]: time="2024-12-13T02:22:36.271751885Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3276 runtime=io.containerd.runc.v2\n" Dec 13 02:22:36.816030 env[1758]: time="2024-12-13T02:22:36.815984994Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:22:36.840375 env[1758]: time="2024-12-13T02:22:36.840326414Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id 
\"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\"" Dec 13 02:22:36.841365 env[1758]: time="2024-12-13T02:22:36.841334057Z" level=info msg="StartContainer for \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\"" Dec 13 02:22:36.909379 env[1758]: time="2024-12-13T02:22:36.909331336Z" level=info msg="StartContainer for \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\" returns successfully" Dec 13 02:22:36.926450 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:22:36.926770 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:22:36.927601 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:22:36.935897 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:22:36.958040 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:22:36.966785 env[1758]: time="2024-12-13T02:22:36.966686809Z" level=info msg="shim disconnected" id=fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61 Dec 13 02:22:36.966785 env[1758]: time="2024-12-13T02:22:36.966780060Z" level=warning msg="cleaning up after shim disconnected" id=fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61 namespace=k8s.io Dec 13 02:22:36.967080 env[1758]: time="2024-12-13T02:22:36.966792919Z" level=info msg="cleaning up dead shim" Dec 13 02:22:36.976365 env[1758]: time="2024-12-13T02:22:36.976313380Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3342 runtime=io.containerd.runc.v2\n" Dec 13 02:22:37.019636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa-rootfs.mount: Deactivated successfully. Dec 13 02:22:37.825450 env[1758]: time="2024-12-13T02:22:37.823305668Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:22:37.855393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258012371.mount: Deactivated successfully. Dec 13 02:22:37.871631 env[1758]: time="2024-12-13T02:22:37.871579382Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\"" Dec 13 02:22:37.873924 env[1758]: time="2024-12-13T02:22:37.873871332Z" level=info msg="StartContainer for \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\"" Dec 13 02:22:37.963035 env[1758]: time="2024-12-13T02:22:37.962982853Z" level=info msg="StartContainer for \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\" returns successfully" Dec 13 02:22:37.998882 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3-rootfs.mount: Deactivated successfully. 
Dec 13 02:22:38.017133 env[1758]: time="2024-12-13T02:22:38.017085068Z" level=info msg="shim disconnected" id=28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3 Dec 13 02:22:38.017477 env[1758]: time="2024-12-13T02:22:38.017139218Z" level=warning msg="cleaning up after shim disconnected" id=28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3 namespace=k8s.io Dec 13 02:22:38.017477 env[1758]: time="2024-12-13T02:22:38.017152190Z" level=info msg="cleaning up dead shim" Dec 13 02:22:38.030733 env[1758]: time="2024-12-13T02:22:38.030668514Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3401 runtime=io.containerd.runc.v2\n" Dec 13 02:22:38.825481 env[1758]: time="2024-12-13T02:22:38.825435900Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:22:38.860364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2736573484.mount: Deactivated successfully. Dec 13 02:22:38.864876 env[1758]: time="2024-12-13T02:22:38.864649297Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\"" Dec 13 02:22:38.868363 env[1758]: time="2024-12-13T02:22:38.867856121Z" level=info msg="StartContainer for \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\"" Dec 13 02:22:38.976245 env[1758]: time="2024-12-13T02:22:38.976194731Z" level=info msg="StartContainer for \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\" returns successfully" Dec 13 02:22:39.012183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40-rootfs.mount: Deactivated successfully. Dec 13 02:22:39.020537 env[1758]: time="2024-12-13T02:22:39.020485173Z" level=info msg="shim disconnected" id=b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40 Dec 13 02:22:39.020537 env[1758]: time="2024-12-13T02:22:39.020535289Z" level=warning msg="cleaning up after shim disconnected" id=b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40 namespace=k8s.io Dec 13 02:22:39.020873 env[1758]: time="2024-12-13T02:22:39.020562406Z" level=info msg="cleaning up dead shim" Dec 13 02:22:39.033110 env[1758]: time="2024-12-13T02:22:39.033056726Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:22:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3457 runtime=io.containerd.runc.v2\n" Dec 13 02:22:39.852842 env[1758]: time="2024-12-13T02:22:39.852795147Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:22:39.913022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294558027.mount: Deactivated successfully. 
Dec 13 02:22:39.928454 env[1758]: time="2024-12-13T02:22:39.928404361Z" level=info msg="CreateContainer within sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\"" Dec 13 02:22:39.930111 env[1758]: time="2024-12-13T02:22:39.929083892Z" level=info msg="StartContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\"" Dec 13 02:22:40.001364 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1463974005.mount: Deactivated successfully. Dec 13 02:22:40.024564 env[1758]: time="2024-12-13T02:22:40.009113357Z" level=info msg="StartContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" returns successfully" Dec 13 02:22:40.065332 systemd[1]: run-containerd-runc-k8s.io-d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50-runc.2arY5H.mount: Deactivated successfully. Dec 13 02:22:40.345022 kubelet[2833]: I1213 02:22:40.344638 2833 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:22:40.673929 kubelet[2833]: I1213 02:22:40.673790 2833 topology_manager.go:215] "Topology Admit Handler" podUID="8da0e162-6947-473b-b1c4-2dd77ae47393" podNamespace="kube-system" podName="coredns-76f75df574-hwcq6" Dec 13 02:22:40.687436 kubelet[2833]: I1213 02:22:40.687304 2833 topology_manager.go:215] "Topology Admit Handler" podUID="9ff33ac4-ff51-4e03-bbcb-1e060d2bb178" podNamespace="kube-system" podName="coredns-76f75df574-tlg86" Dec 13 02:22:40.746704 kubelet[2833]: I1213 02:22:40.746667 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld6fw\" (UniqueName: \"kubernetes.io/projected/8da0e162-6947-473b-b1c4-2dd77ae47393-kube-api-access-ld6fw\") pod \"coredns-76f75df574-hwcq6\" (UID: \"8da0e162-6947-473b-b1c4-2dd77ae47393\") " pod="kube-system/coredns-76f75df574-hwcq6" Dec 13 02:22:40.747040 kubelet[2833]: I1213 02:22:40.747024 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8da0e162-6947-473b-b1c4-2dd77ae47393-config-volume\") pod \"coredns-76f75df574-hwcq6\" (UID: \"8da0e162-6947-473b-b1c4-2dd77ae47393\") " pod="kube-system/coredns-76f75df574-hwcq6" Dec 13 02:22:40.747160 kubelet[2833]: I1213 02:22:40.747150 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p562k\" (UniqueName: \"kubernetes.io/projected/9ff33ac4-ff51-4e03-bbcb-1e060d2bb178-kube-api-access-p562k\") pod \"coredns-76f75df574-tlg86\" (UID: \"9ff33ac4-ff51-4e03-bbcb-1e060d2bb178\") " pod="kube-system/coredns-76f75df574-tlg86" Dec 13 02:22:40.747288 kubelet[2833]: I1213 02:22:40.747271 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9ff33ac4-ff51-4e03-bbcb-1e060d2bb178-config-volume\") pod \"coredns-76f75df574-tlg86\" (UID: \"9ff33ac4-ff51-4e03-bbcb-1e060d2bb178\") " pod="kube-system/coredns-76f75df574-tlg86" Dec 13 02:22:40.983500 env[1758]: time="2024-12-13T02:22:40.983292577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwcq6,Uid:8da0e162-6947-473b-b1c4-2dd77ae47393,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:41.011243 env[1758]: time="2024-12-13T02:22:41.011109890Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-76f75df574-tlg86,Uid:9ff33ac4-ff51-4e03-bbcb-1e060d2bb178,Namespace:kube-system,Attempt:0,}" Dec 13 02:22:48.719705 amazon-ssm-agent[1734]: 2024-12-13 02:22:48 INFO [HealthCheck] HealthCheck reporting agent health. Dec 13 02:22:56.215501 systemd[1]: Started sshd@5-172.31.30.169:22-139.178.68.195:54818.service. Dec 13 02:22:56.437606 sshd[3615]: Accepted publickey for core from 139.178.68.195 port 54818 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:22:56.441385 sshd[3615]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:22:56.458131 systemd[1]: Started session-6.scope. Dec 13 02:22:56.459043 systemd-logind[1749]: New session 6 of user core. Dec 13 02:22:56.865385 sshd[3615]: pam_unix(sshd:session): session closed for user core Dec 13 02:22:56.878785 systemd[1]: sshd@5-172.31.30.169:22-139.178.68.195:54818.service: Deactivated successfully. Dec 13 02:22:56.887179 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:22:56.887275 systemd-logind[1749]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:22:56.891388 systemd-logind[1749]: Removed session 6. Dec 13 02:23:01.903523 systemd[1]: Started sshd@6-172.31.30.169:22-139.178.68.195:54828.service. Dec 13 02:23:02.125529 sshd[3631]: Accepted publickey for core from 139.178.68.195 port 54828 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:02.128928 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:02.149409 systemd[1]: Started session-7.scope. Dec 13 02:23:02.151447 systemd-logind[1749]: New session 7 of user core. Dec 13 02:23:02.885044 sshd[3631]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:02.889870 systemd[1]: sshd@6-172.31.30.169:22-139.178.68.195:54828.service: Deactivated successfully. Dec 13 02:23:02.893280 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:23:02.893990 systemd-logind[1749]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:23:02.896185 systemd-logind[1749]: Removed session 7. Dec 13 02:23:04.078967 systemd-networkd[1439]: cilium_host: Link UP Dec 13 02:23:04.082617 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:23:04.085241 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:23:04.082726 systemd-networkd[1439]: cilium_net: Link UP Dec 13 02:23:04.083107 systemd-networkd[1439]: cilium_net: Gained carrier Dec 13 02:23:04.083645 systemd-networkd[1439]: cilium_host: Gained carrier Dec 13 02:23:04.087957 (udev-worker)[3651]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:04.090316 (udev-worker)[3652]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:04.448736 systemd-networkd[1439]: cilium_net: Gained IPv6LL Dec 13 02:23:04.465235 systemd-networkd[1439]: cilium_host: Gained IPv6LL Dec 13 02:23:04.635786 systemd-networkd[1439]: cilium_vxlan: Link UP Dec 13 02:23:04.635798 systemd-networkd[1439]: cilium_vxlan: Gained carrier Dec 13 02:23:06.144790 systemd-networkd[1439]: cilium_vxlan: Gained IPv6LL Dec 13 02:23:07.911700 systemd[1]: Started sshd@7-172.31.30.169:22-139.178.68.195:48632.service. 
Dec 13 02:23:08.113934 sshd[3735]: Accepted publickey for core from 139.178.68.195 port 48632 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:08.115665 sshd[3735]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:08.120921 systemd-logind[1749]: New session 8 of user core. Dec 13 02:23:08.122471 systemd[1]: Started session-8.scope. Dec 13 02:23:08.375195 sshd[3735]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:08.380268 systemd[1]: sshd@7-172.31.30.169:22-139.178.68.195:48632.service: Deactivated successfully. Dec 13 02:23:08.381504 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:23:08.382421 systemd-logind[1749]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:23:08.384347 systemd-logind[1749]: Removed session 8. Dec 13 02:23:10.033596 kernel: NET: Registered PF_ALG protocol family Dec 13 02:23:11.661197 (udev-worker)[3756]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:11.662937 (udev-worker)[4016]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:11.667723 systemd-networkd[1439]: lxc_health: Link UP Dec 13 02:23:11.678573 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:23:11.678681 systemd-networkd[1439]: lxc_health: Gained carrier Dec 13 02:23:12.210409 env[1758]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Dec 13 02:23:12.215120 systemd[1]: run-netns-cni\x2d4f334490\x2d416e\x2d2f2a\x2d7030\x2d402ee4dc0650.mount: Deactivated successfully. Dec 13 02:23:12.223383 env[1758]: level=warning msg="Errors encountered while deleting endpoint" error="[DELETE /endpoint/{id}][404] deleteEndpointIdNotFound " subsys=cilium-cni Dec 13 02:23:12.225140 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5-shm.mount: Deactivated successfully. Dec 13 02:23:12.226280 env[1758]: time="2024-12-13T02:23:12.226198249Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwcq6,Uid:8da0e162-6947-473b-b1c4-2dd77ae47393,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Dec 13 02:23:12.238854 systemd[1]: run-netns-cni\x2d1e142a52\x2d59fd\x2dae31\x2d7c20\x2d5f1c1a2b0f72.mount: Deactivated successfully. Dec 13 02:23:12.239054 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec-shm.mount: Deactivated successfully. 
Dec 13 02:23:12.258251 env[1758]: time="2024-12-13T02:23:12.257895381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tlg86,Uid:9ff33ac4-ff51-4e03-bbcb-1e060d2bb178,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec\": plugin type=\"cilium-cni\" name=\"cilium\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \"http:///var/run/cilium/cilium.sock/v1/config\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\nIs the agent running?" Dec 13 02:23:12.262737 kubelet[2833]: E1213 02:23:12.262691 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Dec 13 02:23:12.262737 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.262737 kubelet[2833]: Is the agent running? Dec 13 02:23:12.262737 kubelet[2833]: > Dec 13 02:23:12.263663 kubelet[2833]: E1213 02:23:12.263642 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 13 02:23:12.263663 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.263663 kubelet[2833]: Is the agent running? Dec 13 02:23:12.263663 kubelet[2833]: > pod="kube-system/coredns-76f75df574-hwcq6" Dec 13 02:23:12.264286 kubelet[2833]: E1213 02:23:12.264150 2833 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Dec 13 02:23:12.264286 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.264286 kubelet[2833]: Is the agent running? 
Dec 13 02:23:12.264286 kubelet[2833]: > pod="kube-system/coredns-76f75df574-hwcq6" Dec 13 02:23:12.267021 kubelet[2833]: E1213 02:23:12.266997 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hwcq6_kube-system(8da0e162-6947-473b-b1c4-2dd77ae47393)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hwcq6_kube-system(8da0e162-6947-473b-b1c4-2dd77ae47393)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cf485e016c9d6dac7936c9c3c311c0f73577421726f80f50af961f8edf8c85c5\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-76f75df574-hwcq6" podUID="8da0e162-6947-473b-b1c4-2dd77ae47393" Dec 13 02:23:12.267991 kubelet[2833]: E1213 02:23:12.267970 2833 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err=< Dec 13 02:23:12.267991 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.267991 kubelet[2833]: Is the agent running? Dec 13 02:23:12.267991 kubelet[2833]: > Dec 13 02:23:12.268301 kubelet[2833]: E1213 02:23:12.268273 2833 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Dec 13 02:23:12.268301 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.268301 kubelet[2833]: Is the agent running? Dec 13 02:23:12.268301 kubelet[2833]: > pod="kube-system/coredns-76f75df574-tlg86" Dec 13 02:23:12.270874 kubelet[2833]: E1213 02:23:12.268771 2833 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err=< Dec 13 02:23:12.270874 kubelet[2833]: rpc error: code = Unknown desc = failed to setup network for sandbox "13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec": plugin type="cilium-cni" name="cilium" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get "http:///var/run/cilium/cilium.sock/v1/config": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory Dec 13 02:23:12.270874 kubelet[2833]: Is the agent running? 
Dec 13 02:23:12.270874 kubelet[2833]: > pod="kube-system/coredns-76f75df574-tlg86" Dec 13 02:23:12.271711 kubelet[2833]: E1213 02:23:12.271377 2833 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-tlg86_kube-system(9ff33ac4-ff51-4e03-bbcb-1e060d2bb178)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-tlg86_kube-system(9ff33ac4-ff51-4e03-bbcb-1e060d2bb178)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13c60f255686ddb4556b9bffb902f6b2a3a57703afebac4fb054621a1ddd7bec\\\": plugin type=\\\"cilium-cni\\\" name=\\\"cilium\\\" failed (add): unable to connect to Cilium daemon: failed to create cilium agent client after 30.000000 seconds timeout: Get \\\"http:///var/run/cilium/cilium.sock/v1/config\\\": dial unix /var/run/cilium/cilium.sock: connect: no such file or directory\\nIs the agent running?\"" pod="kube-system/coredns-76f75df574-tlg86" podUID="9ff33ac4-ff51-4e03-bbcb-1e060d2bb178" Dec 13 02:23:12.525571 kubelet[2833]: I1213 02:23:12.525440 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-m6jfj" podStartSLOduration=39.181720691 podStartE2EDuration="52.525368928s" podCreationTimestamp="2024-12-13 02:22:20 +0000 UTC" firstStartedPulling="2024-12-13 02:22:22.636847867 +0000 UTC m=+14.466822387" lastFinishedPulling="2024-12-13 02:22:35.980496106 +0000 UTC m=+27.810470624" observedRunningTime="2024-12-13 02:22:40.943036029 +0000 UTC m=+32.773010558" watchObservedRunningTime="2024-12-13 02:23:12.525368928 +0000 UTC m=+64.355343458" Dec 13 02:23:13.401850 systemd[1]: Started sshd@8-172.31.30.169:22-139.178.68.195:48636.service. Dec 13 02:23:13.504681 systemd-networkd[1439]: lxc_health: Gained IPv6LL Dec 13 02:23:13.758337 sshd[4046]: Accepted publickey for core from 139.178.68.195 port 48636 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:13.759625 sshd[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:13.768643 systemd[1]: Started session-9.scope. Dec 13 02:23:13.769680 systemd-logind[1749]: New session 9 of user core. Dec 13 02:23:14.183635 sshd[4046]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:14.189596 systemd-logind[1749]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:23:14.190681 systemd[1]: sshd@8-172.31.30.169:22-139.178.68.195:48636.service: Deactivated successfully. Dec 13 02:23:14.191919 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:23:14.193670 systemd-logind[1749]: Removed session 9. Dec 13 02:23:19.228990 systemd[1]: Started sshd@9-172.31.30.169:22-139.178.68.195:46910.service. Dec 13 02:23:19.440485 sshd[4065]: Accepted publickey for core from 139.178.68.195 port 46910 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:19.440910 sshd[4065]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:19.450535 systemd[1]: Started session-10.scope. Dec 13 02:23:19.451393 systemd-logind[1749]: New session 10 of user core. Dec 13 02:23:19.713088 sshd[4065]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:19.717569 systemd[1]: sshd@9-172.31.30.169:22-139.178.68.195:46910.service: Deactivated successfully. Dec 13 02:23:19.719202 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:23:19.719835 systemd-logind[1749]: Session 10 logged out. Waiting for processes to exit. 
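Every sandbox failure in the block above bottoms out in the same call: the cilium-cni plugin GETs /v1/config over the agent's unix socket and fails because /var/run/cilium/cilium.sock does not exist yet (the agent only finished starting at 02:22:40, and the CNI retries here still hit a stale window). A minimal sketch of that probe, doing HTTP over a unix socket with the standard library; the socket path and endpoint come from the log, the rest is illustrative:

package main

import (
	"context"
	"fmt"
	"io"
	"net"
	"net/http"
	"time"
)

func main() {
	const sock = "/var/run/cilium/cilium.sock"

	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Route every request to the agent's unix socket, ignoring the
			// host part of the URL (hence the odd "http:///" form in the log).
			DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
				var d net.Dialer
				return d.DialContext(ctx, "unix", sock)
			},
		},
	}

	resp, err := client.Get("http://localhost/v1/config")
	if err != nil {
		// With no agent listening this fails just like the sandbox errors:
		// "dial unix /var/run/cilium/cilium.sock: connect: no such file or directory"
		fmt.Println("agent not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("agent config: %d bytes, status %s\n", len(body), resp.Status)
}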
Dec 13 02:23:19.721047 systemd-logind[1749]: Removed session 10. Dec 13 02:23:19.738843 systemd[1]: Started sshd@10-172.31.30.169:22-139.178.68.195:46922.service. Dec 13 02:23:19.908745 sshd[4079]: Accepted publickey for core from 139.178.68.195 port 46922 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:19.910490 sshd[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:19.920033 systemd-logind[1749]: New session 11 of user core. Dec 13 02:23:19.920887 systemd[1]: Started session-11.scope. Dec 13 02:23:20.219019 sshd[4079]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:20.226068 systemd-logind[1749]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:23:20.228753 systemd[1]: sshd@10-172.31.30.169:22-139.178.68.195:46922.service: Deactivated successfully. Dec 13 02:23:20.229955 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:23:20.239343 systemd[1]: Started sshd@11-172.31.30.169:22-139.178.68.195:46938.service. Dec 13 02:23:20.239508 systemd-logind[1749]: Removed session 11. Dec 13 02:23:20.426689 sshd[4090]: Accepted publickey for core from 139.178.68.195 port 46938 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:20.429103 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:20.437237 systemd[1]: Started session-12.scope. Dec 13 02:23:20.437809 systemd-logind[1749]: New session 12 of user core. Dec 13 02:23:20.676197 sshd[4090]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:20.682072 systemd[1]: sshd@11-172.31.30.169:22-139.178.68.195:46938.service: Deactivated successfully. Dec 13 02:23:20.684295 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:23:20.685690 systemd-logind[1749]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:23:20.688836 systemd-logind[1749]: Removed session 12. Dec 13 02:23:23.533716 env[1758]: time="2024-12-13T02:23:23.533663509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwcq6,Uid:8da0e162-6947-473b-b1c4-2dd77ae47393,Namespace:kube-system,Attempt:0,}" Dec 13 02:23:23.581415 systemd-networkd[1439]: lxc70799d951376: Link UP Dec 13 02:23:23.589731 kernel: eth0: renamed from tmp7f313 Dec 13 02:23:23.592948 (udev-worker)[4116]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:23.595430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:23:23.596118 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc70799d951376: link becomes ready Dec 13 02:23:23.595754 systemd-networkd[1439]: lxc70799d951376: Gained carrier Dec 13 02:23:23.853795 env[1758]: time="2024-12-13T02:23:23.853697450Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:23.853795 env[1758]: time="2024-12-13T02:23:23.853769511Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:23.854054 env[1758]: time="2024-12-13T02:23:23.854007088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:23.854532 env[1758]: time="2024-12-13T02:23:23.854457737Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f313f07a20579b32fa61abcb822c7d6044402a9d79488f6960ada7a67865578 pid=4131 runtime=io.containerd.runc.v2 Dec 13 02:23:23.956250 env[1758]: time="2024-12-13T02:23:23.956206825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hwcq6,Uid:8da0e162-6947-473b-b1c4-2dd77ae47393,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f313f07a20579b32fa61abcb822c7d6044402a9d79488f6960ada7a67865578\"" Dec 13 02:23:23.963055 env[1758]: time="2024-12-13T02:23:23.962768790Z" level=info msg="CreateContainer within sandbox \"7f313f07a20579b32fa61abcb822c7d6044402a9d79488f6960ada7a67865578\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:23:24.042908 env[1758]: time="2024-12-13T02:23:24.042855217Z" level=info msg="CreateContainer within sandbox \"7f313f07a20579b32fa61abcb822c7d6044402a9d79488f6960ada7a67865578\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a35a8e42950b3bdc36e578602f7e3612a38fb86546f01032efaf9e399a178d67\"" Dec 13 02:23:24.043661 env[1758]: time="2024-12-13T02:23:24.043631337Z" level=info msg="StartContainer for \"a35a8e42950b3bdc36e578602f7e3612a38fb86546f01032efaf9e399a178d67\"" Dec 13 02:23:24.120775 env[1758]: time="2024-12-13T02:23:24.120105525Z" level=info msg="StartContainer for \"a35a8e42950b3bdc36e578602f7e3612a38fb86546f01032efaf9e399a178d67\" returns successfully" Dec 13 02:23:24.552089 systemd[1]: run-containerd-runc-k8s.io-7f313f07a20579b32fa61abcb822c7d6044402a9d79488f6960ada7a67865578-runc.7WXuWX.mount: Deactivated successfully. Dec 13 02:23:25.534564 env[1758]: time="2024-12-13T02:23:25.534494283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tlg86,Uid:9ff33ac4-ff51-4e03-bbcb-1e060d2bb178,Namespace:kube-system,Attempt:0,}" Dec 13 02:23:25.537678 systemd-networkd[1439]: lxc70799d951376: Gained IPv6LL Dec 13 02:23:25.660469 systemd-networkd[1439]: lxcac998dec2668: Link UP Dec 13 02:23:25.674067 (udev-worker)[4121]: Network interface NamePolicy= disabled on kernel command line. Dec 13 02:23:25.682661 kernel: eth0: renamed from tmp0cbab Dec 13 02:23:25.694897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 02:23:25.695040 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcac998dec2668: link becomes ready Dec 13 02:23:25.695519 systemd-networkd[1439]: lxcac998dec2668: Gained carrier Dec 13 02:23:25.709092 systemd[1]: Started sshd@12-172.31.30.169:22-139.178.68.195:46940.service. Dec 13 02:23:25.943503 sshd[4216]: Accepted publickey for core from 139.178.68.195 port 46940 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:25.948043 sshd[4216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:25.954914 env[1758]: time="2024-12-13T02:23:25.954832430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:23:25.954914 env[1758]: time="2024-12-13T02:23:25.954875909Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:23:25.954914 env[1758]: time="2024-12-13T02:23:25.954891545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:23:25.955409 env[1758]: time="2024-12-13T02:23:25.955368421Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0cbabb866e6b847b7204780e3e8ef02247367263c339374ca917c155615b343d pid=4227 runtime=io.containerd.runc.v2 Dec 13 02:23:25.969716 systemd[1]: Started session-13.scope. Dec 13 02:23:25.970528 systemd-logind[1749]: New session 13 of user core. Dec 13 02:23:26.012510 systemd[1]: run-containerd-runc-k8s.io-0cbabb866e6b847b7204780e3e8ef02247367263c339374ca917c155615b343d-runc.sAUI7b.mount: Deactivated successfully. Dec 13 02:23:26.064952 env[1758]: time="2024-12-13T02:23:26.064905803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-tlg86,Uid:9ff33ac4-ff51-4e03-bbcb-1e060d2bb178,Namespace:kube-system,Attempt:0,} returns sandbox id \"0cbabb866e6b847b7204780e3e8ef02247367263c339374ca917c155615b343d\"" Dec 13 02:23:26.070245 env[1758]: time="2024-12-13T02:23:26.070170261Z" level=info msg="CreateContainer within sandbox \"0cbabb866e6b847b7204780e3e8ef02247367263c339374ca917c155615b343d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:23:26.105863 env[1758]: time="2024-12-13T02:23:26.105819728Z" level=info msg="CreateContainer within sandbox \"0cbabb866e6b847b7204780e3e8ef02247367263c339374ca917c155615b343d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c5dc15c6f2dc0b2961ec8673929b9be9019088217c98694153f0b3a77c61a1ed\"" Dec 13 02:23:26.107020 env[1758]: time="2024-12-13T02:23:26.106985987Z" level=info msg="StartContainer for \"c5dc15c6f2dc0b2961ec8673929b9be9019088217c98694153f0b3a77c61a1ed\"" Dec 13 02:23:26.200430 env[1758]: time="2024-12-13T02:23:26.200313035Z" level=info msg="StartContainer for \"c5dc15c6f2dc0b2961ec8673929b9be9019088217c98694153f0b3a77c61a1ed\" returns successfully" Dec 13 02:23:26.262079 sshd[4216]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:26.273596 systemd[1]: sshd@12-172.31.30.169:22-139.178.68.195:46940.service: Deactivated successfully. Dec 13 02:23:26.274934 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:23:26.276209 systemd-logind[1749]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:23:26.277719 systemd-logind[1749]: Removed session 13. Dec 13 02:23:26.611849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1100946588.mount: Deactivated successfully. 
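With both CoreDNS pods now running above, cluster names become resolvable through the cluster DNS service. A sketch of a lookup pointed explicitly at that service; note the address 10.96.0.10 is a placeholder assumption, as the log never records this cluster's actual DNS service IP:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Placeholder for the cluster DNS service IP; not recorded in the log above.
	const dnsAddr = "10.96.0.10:53"

	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			var d net.Dialer
			return d.DialContext(ctx, network, dnsAddr)
		},
	}

	ctx, cancel := context.WithTimeout(context.Background(), 3*time.Second)
	defer cancel()

	addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("resolved:", addrs)
}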
Dec 13 02:23:26.944750 systemd-networkd[1439]: lxcac998dec2668: Gained IPv6LL Dec 13 02:23:27.069530 kubelet[2833]: I1213 02:23:27.069494 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hwcq6" podStartSLOduration=67.069454849 podStartE2EDuration="1m7.069454849s" podCreationTimestamp="2024-12-13 02:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:23:25.041596994 +0000 UTC m=+76.871571523" watchObservedRunningTime="2024-12-13 02:23:27.069454849 +0000 UTC m=+78.899429376" Dec 13 02:23:27.070225 kubelet[2833]: I1213 02:23:27.069619 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-tlg86" podStartSLOduration=67.069595706 podStartE2EDuration="1m7.069595706s" podCreationTimestamp="2024-12-13 02:22:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:23:27.069117969 +0000 UTC m=+78.899092497" watchObservedRunningTime="2024-12-13 02:23:27.069595706 +0000 UTC m=+78.899570234" Dec 13 02:23:31.288203 systemd[1]: Started sshd@13-172.31.30.169:22-139.178.68.195:54140.service. Dec 13 02:23:31.471492 sshd[4324]: Accepted publickey for core from 139.178.68.195 port 54140 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:31.474201 sshd[4324]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:31.481710 systemd[1]: Started session-14.scope. Dec 13 02:23:31.483017 systemd-logind[1749]: New session 14 of user core. Dec 13 02:23:31.765285 sshd[4324]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:31.769588 systemd[1]: sshd@13-172.31.30.169:22-139.178.68.195:54140.service: Deactivated successfully. Dec 13 02:23:31.771843 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:23:31.772188 systemd-logind[1749]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:23:31.773742 systemd-logind[1749]: Removed session 14. Dec 13 02:23:31.790425 systemd[1]: Started sshd@14-172.31.30.169:22-139.178.68.195:54144.service. Dec 13 02:23:31.960481 sshd[4337]: Accepted publickey for core from 139.178.68.195 port 54144 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:31.962395 sshd[4337]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:31.968826 systemd[1]: Started session-15.scope. Dec 13 02:23:31.969297 systemd-logind[1749]: New session 15 of user core. Dec 13 02:23:32.823894 sshd[4337]: pam_unix(sshd:session): session closed for user core Dec 13 02:23:32.851927 systemd[1]: sshd@14-172.31.30.169:22-139.178.68.195:54144.service: Deactivated successfully. Dec 13 02:23:32.860904 systemd-logind[1749]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:23:32.865569 systemd[1]: Started sshd@15-172.31.30.169:22-139.178.68.195:54148.service. Dec 13 02:23:32.867560 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:23:32.869070 systemd-logind[1749]: Removed session 15. Dec 13 02:23:33.052734 sshd[4347]: Accepted publickey for core from 139.178.68.195 port 54148 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY Dec 13 02:23:33.054980 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:23:33.063599 systemd[1]: Started session-16.scope. 
Dec 13 02:23:33.066620 systemd-logind[1749]: New session 16 of user core.
Dec 13 02:23:35.536872 sshd[4347]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:35.550869 systemd[1]: sshd@15-172.31.30.169:22-139.178.68.195:54148.service: Deactivated successfully.
Dec 13 02:23:35.554449 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 02:23:35.559627 systemd-logind[1749]: Session 16 logged out. Waiting for processes to exit.
Dec 13 02:23:35.565791 systemd[1]: Started sshd@16-172.31.30.169:22-139.178.68.195:54162.service.
Dec 13 02:23:35.569310 systemd-logind[1749]: Removed session 16.
Dec 13 02:23:35.749884 sshd[4367]: Accepted publickey for core from 139.178.68.195 port 54162 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:35.752476 sshd[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:35.759685 systemd[1]: Started session-17.scope.
Dec 13 02:23:35.760000 systemd-logind[1749]: New session 17 of user core.
Dec 13 02:23:36.292867 sshd[4367]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:36.297055 systemd[1]: sshd@16-172.31.30.169:22-139.178.68.195:54162.service: Deactivated successfully.
Dec 13 02:23:36.299046 systemd-logind[1749]: Session 17 logged out. Waiting for processes to exit.
Dec 13 02:23:36.299132 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 02:23:36.300618 systemd-logind[1749]: Removed session 17.
Dec 13 02:23:36.317216 systemd[1]: Started sshd@17-172.31.30.169:22-139.178.68.195:34356.service.
Dec 13 02:23:36.483039 sshd[4379]: Accepted publickey for core from 139.178.68.195 port 34356 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:36.485390 sshd[4379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:36.493369 systemd[1]: Started session-18.scope.
Dec 13 02:23:36.494981 systemd-logind[1749]: New session 18 of user core.
Dec 13 02:23:36.736131 sshd[4379]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:36.743762 systemd[1]: sshd@17-172.31.30.169:22-139.178.68.195:34356.service: Deactivated successfully.
Dec 13 02:23:36.745141 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 02:23:36.746213 systemd-logind[1749]: Session 18 logged out. Waiting for processes to exit.
Dec 13 02:23:36.748145 systemd-logind[1749]: Removed session 18.
Dec 13 02:23:41.760282 systemd[1]: Started sshd@18-172.31.30.169:22-139.178.68.195:34366.service.
Dec 13 02:23:41.937529 sshd[4392]: Accepted publickey for core from 139.178.68.195 port 34366 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:41.942039 sshd[4392]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:41.956156 systemd-logind[1749]: New session 19 of user core.
Dec 13 02:23:41.956331 systemd[1]: Started session-19.scope.
Dec 13 02:23:42.242518 sshd[4392]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:42.249654 systemd[1]: sshd@18-172.31.30.169:22-139.178.68.195:34366.service: Deactivated successfully.
Dec 13 02:23:42.251714 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 02:23:42.251758 systemd-logind[1749]: Session 19 logged out. Waiting for processes to exit.
Dec 13 02:23:42.253720 systemd-logind[1749]: Removed session 19.
Dec 13 02:23:47.270805 systemd[1]: Started sshd@19-172.31.30.169:22-139.178.68.195:58888.service.
Dec 13 02:23:47.463645 sshd[4408]: Accepted publickey for core from 139.178.68.195 port 58888 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:47.469339 sshd[4408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:47.491082 systemd-logind[1749]: New session 20 of user core.
Dec 13 02:23:47.491498 systemd[1]: Started session-20.scope.
Dec 13 02:23:47.683606 sshd[4408]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:47.690122 systemd[1]: sshd@19-172.31.30.169:22-139.178.68.195:58888.service: Deactivated successfully.
Dec 13 02:23:47.690613 systemd-logind[1749]: Session 20 logged out. Waiting for processes to exit.
Dec 13 02:23:47.691170 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 02:23:47.701974 systemd-logind[1749]: Removed session 20.
Dec 13 02:23:52.708361 systemd[1]: Started sshd@20-172.31.30.169:22-139.178.68.195:58898.service.
Dec 13 02:23:52.876574 sshd[4421]: Accepted publickey for core from 139.178.68.195 port 58898 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:52.878173 sshd[4421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:52.884714 systemd-logind[1749]: New session 21 of user core.
Dec 13 02:23:52.885067 systemd[1]: Started session-21.scope.
Dec 13 02:23:53.100193 sshd[4421]: pam_unix(sshd:session): session closed for user core
Dec 13 02:23:53.103496 systemd-logind[1749]: Session 21 logged out. Waiting for processes to exit.
Dec 13 02:23:53.103872 systemd[1]: sshd@20-172.31.30.169:22-139.178.68.195:58898.service: Deactivated successfully.
Dec 13 02:23:53.105046 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 02:23:53.105672 systemd-logind[1749]: Removed session 21.
Dec 13 02:23:53.126578 systemd[1]: Started sshd@21-172.31.30.169:22-139.178.68.195:58910.service.
Dec 13 02:23:53.289962 sshd[4434]: Accepted publickey for core from 139.178.68.195 port 58910 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:23:53.291534 sshd[4434]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:23:53.299690 systemd[1]: Started session-22.scope.
Dec 13 02:23:53.300613 systemd-logind[1749]: New session 22 of user core.
Dec 13 02:24:00.369829 env[1758]: time="2024-12-13T02:24:00.369476352Z" level=info msg="StopContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" with timeout 30 (s)"
Dec 13 02:24:00.371917 env[1758]: time="2024-12-13T02:24:00.371869995Z" level=info msg="Stop container \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" with signal terminated"
Dec 13 02:24:00.437497 systemd[1]: run-containerd-runc-k8s.io-d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50-runc.reYCvc.mount: Deactivated successfully.
Dec 13 02:24:00.486550 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0-rootfs.mount: Deactivated successfully.
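[editor's aside] The StopContainer entries above show the standard two-phase stop: SIGTERM with a grace period (30 s here, and 2 s below for the cilium-agent container), with a hard kill if the process outlives it. A generic Go sketch of that pattern for a child process — this illustrates the general technique, not containerd's actual implementation:

package main

import (
	"os/exec"
	"syscall"
	"time"
)

// stopGracefully sends SIGTERM, waits up to grace for exit, then SIGKILLs.
func stopGracefully(cmd *exec.Cmd, grace time.Duration) error {
	if err := cmd.Process.Signal(syscall.SIGTERM); err != nil {
		return err
	}
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()
	select {
	case err := <-done:
		return err // exited within the grace period
	case <-time.After(grace):
		_ = cmd.Process.Kill() // grace period expired: hard kill
		return <-done
	}
}

func main() {
	cmd := exec.Command("sleep", "300")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	_ = stopGracefully(cmd, 2*time.Second)
}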
Dec 13 02:24:00.502697 env[1758]: time="2024-12-13T02:24:00.502655994Z" level=info msg="shim disconnected" id=ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0
Dec 13 02:24:00.503017 env[1758]: time="2024-12-13T02:24:00.502999117Z" level=warning msg="cleaning up after shim disconnected" id=ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0 namespace=k8s.io
Dec 13 02:24:00.503118 env[1758]: time="2024-12-13T02:24:00.503106214Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:00.511716 env[1758]: time="2024-12-13T02:24:00.511660806Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4484 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:00.515401 env[1758]: time="2024-12-13T02:24:00.515357342Z" level=info msg="StopContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" returns successfully"
Dec 13 02:24:00.516086 env[1758]: time="2024-12-13T02:24:00.516049778Z" level=info msg="StopPodSandbox for \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\""
Dec 13 02:24:00.520027 env[1758]: time="2024-12-13T02:24:00.516131785Z" level=info msg="Container to stop \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.519842 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b-shm.mount: Deactivated successfully.
Dec 13 02:24:00.568054 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b-rootfs.mount: Deactivated successfully.
Dec 13 02:24:00.585196 env[1758]: time="2024-12-13T02:24:00.585138102Z" level=info msg="shim disconnected" id=135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b
Dec 13 02:24:00.585196 env[1758]: time="2024-12-13T02:24:00.585198493Z" level=warning msg="cleaning up after shim disconnected" id=135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b namespace=k8s.io
Dec 13 02:24:00.587781 env[1758]: time="2024-12-13T02:24:00.585211413Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:00.597192 env[1758]: time="2024-12-13T02:24:00.597137216Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4517 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:00.597558 env[1758]: time="2024-12-13T02:24:00.597510128Z" level=info msg="TearDown network for sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" successfully"
Dec 13 02:24:00.597558 env[1758]: time="2024-12-13T02:24:00.597537466Z" level=info msg="StopPodSandbox for \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" returns successfully"
Dec 13 02:24:00.652123 env[1758]: time="2024-12-13T02:24:00.651888689Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 02:24:00.659857 env[1758]: time="2024-12-13T02:24:00.659815255Z" level=info msg="StopContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" with timeout 2 (s)"
Dec 13 02:24:00.660145 env[1758]: time="2024-12-13T02:24:00.660117040Z" level=info msg="Stop container \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" with signal terminated"
Dec 13 02:24:00.669165 systemd-networkd[1439]: lxc_health: Link DOWN
Dec 13 02:24:00.669175 systemd-networkd[1439]: lxc_health: Lost carrier
Dec 13 02:24:00.749862 kubelet[2833]: I1213 02:24:00.728869 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gl8zk\" (UniqueName: \"kubernetes.io/projected/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-kube-api-access-gl8zk\") pod \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\" (UID: \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\") "
Dec 13 02:24:00.749862 kubelet[2833]: I1213 02:24:00.729006 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-cilium-config-path\") pod \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\" (UID: \"e3010c74-ddb2-4a3e-b491-43e90efb9c1d\") "
Dec 13 02:24:00.749862 kubelet[2833]: I1213 02:24:00.742490 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e3010c74-ddb2-4a3e-b491-43e90efb9c1d" (UID: "e3010c74-ddb2-4a3e-b491-43e90efb9c1d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:24:00.777767 kubelet[2833]: I1213 02:24:00.777416 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-kube-api-access-gl8zk" (OuterVolumeSpecName: "kube-api-access-gl8zk") pod "e3010c74-ddb2-4a3e-b491-43e90efb9c1d" (UID: "e3010c74-ddb2-4a3e-b491-43e90efb9c1d"). InnerVolumeSpecName "kube-api-access-gl8zk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:00.830974 kubelet[2833]: I1213 02:24:00.830900 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-cilium-config-path\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:00.830974 kubelet[2833]: I1213 02:24:00.830942 2833 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-gl8zk\" (UniqueName: \"kubernetes.io/projected/e3010c74-ddb2-4a3e-b491-43e90efb9c1d-kube-api-access-gl8zk\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:00.845378 env[1758]: time="2024-12-13T02:24:00.845319178Z" level=info msg="shim disconnected" id=d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50
Dec 13 02:24:00.845378 env[1758]: time="2024-12-13T02:24:00.845376990Z" level=warning msg="cleaning up after shim disconnected" id=d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50 namespace=k8s.io
Dec 13 02:24:00.845851 env[1758]: time="2024-12-13T02:24:00.845389149Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:00.856617 env[1758]: time="2024-12-13T02:24:00.856567438Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4554 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:00.860377 env[1758]: time="2024-12-13T02:24:00.860331492Z" level=info msg="StopContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" returns successfully"
Dec 13 02:24:00.860904 env[1758]: time="2024-12-13T02:24:00.860870589Z" level=info msg="StopPodSandbox for \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\""
Dec 13 02:24:00.861027 env[1758]: time="2024-12-13T02:24:00.860937903Z" level=info msg="Container to stop \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.861027 env[1758]: time="2024-12-13T02:24:00.860958049Z" level=info msg="Container to stop \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.861027 env[1758]: time="2024-12-13T02:24:00.860974434Z" level=info msg="Container to stop \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.861027 env[1758]: time="2024-12-13T02:24:00.860996062Z" level=info msg="Container to stop \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.861027 env[1758]: time="2024-12-13T02:24:00.861011357Z" level=info msg="Container to stop \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:00.902736 env[1758]: time="2024-12-13T02:24:00.902610588Z" level=info msg="shim disconnected" id=3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1
Dec 13 02:24:00.902736 env[1758]: time="2024-12-13T02:24:00.902666357Z" level=warning msg="cleaning up after shim disconnected" id=3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1 namespace=k8s.io
Dec 13 02:24:00.902736 env[1758]: time="2024-12-13T02:24:00.902678285Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:00.916158 env[1758]: time="2024-12-13T02:24:00.916111873Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4588 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:00.916491 env[1758]: time="2024-12-13T02:24:00.916455030Z" level=info msg="TearDown network for sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" successfully"
Dec 13 02:24:00.916614 env[1758]: time="2024-12-13T02:24:00.916485241Z" level=info msg="StopPodSandbox for \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" returns successfully"
Dec 13 02:24:01.033529 kubelet[2833]: I1213 02:24:01.033484 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-hostproc\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033529 kubelet[2833]: I1213 02:24:01.033560 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-lib-modules\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033587 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-xtables-lock\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033609 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-etc-cni-netd\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033639 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-hubble-tls\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033661 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-run\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033693 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cj22h\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-kube-api-access-cj22h\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.033806 kubelet[2833]: I1213 02:24:01.033716 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-cgroup\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033741 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-kernel\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033776 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-config-path\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033803 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-bpf-maps\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033829 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cni-path\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033857 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-net\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034094 kubelet[2833]: I1213 02:24:01.033926 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets\") pod \"483e5960-5ffb-494a-b026-7de47696a7c0\" (UID: \"483e5960-5ffb-494a-b026-7de47696a7c0\") "
Dec 13 02:24:01.034915 kubelet[2833]: I1213 02:24:01.034883 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035037 kubelet[2833]: I1213 02:24:01.034947 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035440 kubelet[2833]: I1213 02:24:01.035413 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-hostproc" (OuterVolumeSpecName: "hostproc") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035534 kubelet[2833]: I1213 02:24:01.035483 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035534 kubelet[2833]: I1213 02:24:01.035510 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035655 kubelet[2833]: I1213 02:24:01.035531 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035766 kubelet[2833]: I1213 02:24:01.035748 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.035888 kubelet[2833]: I1213 02:24:01.035871 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cni-path" (OuterVolumeSpecName: "cni-path") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.042992 kubelet[2833]: I1213 02:24:01.042942 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:24:01.043282 kubelet[2833]: I1213 02:24:01.043245 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.043420 kubelet[2833]: I1213 02:24:01.043405 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:01.043820 kubelet[2833]: I1213 02:24:01.043790 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:24:01.056838 kubelet[2833]: I1213 02:24:01.056752 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-kube-api-access-cj22h" (OuterVolumeSpecName: "kube-api-access-cj22h") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "kube-api-access-cj22h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:01.059004 kubelet[2833]: I1213 02:24:01.058955 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "483e5960-5ffb-494a-b026-7de47696a7c0" (UID: "483e5960-5ffb-494a-b026-7de47696a7c0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.143936 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-config-path\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.143982 2833 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-bpf-maps\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.143998 2833 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-net\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.144013 2833 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/483e5960-5ffb-494a-b026-7de47696a7c0-clustermesh-secrets\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.144028 2833 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cni-path\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.144309 2833 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-hostproc\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.144333 2833 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-lib-modules\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.144480 kubelet[2833]: I1213 02:24:01.144385 2833 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-xtables-lock\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.144412 2833 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-etc-cni-netd\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.144436 2833 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-hubble-tls\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.144459 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-cgroup\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.144475 2833 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-host-proc-sys-kernel\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.144487 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/483e5960-5ffb-494a-b026-7de47696a7c0-cilium-run\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.148032 kubelet[2833]: I1213 02:24:01.145148 2833 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cj22h\" (UniqueName: \"kubernetes.io/projected/483e5960-5ffb-494a-b026-7de47696a7c0-kube-api-access-cj22h\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:01.176058 kubelet[2833]: I1213 02:24:01.175140 2833 scope.go:117] "RemoveContainer" containerID="ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0"
Dec 13 02:24:01.180236 env[1758]: time="2024-12-13T02:24:01.180181393Z" level=info msg="RemoveContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\""
Dec 13 02:24:01.192483 env[1758]: time="2024-12-13T02:24:01.192203569Z" level=info msg="RemoveContainer for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" returns successfully"
Dec 13 02:24:01.194685 kubelet[2833]: I1213 02:24:01.194122 2833 scope.go:117] "RemoveContainer" containerID="ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0"
Dec 13 02:24:01.196163 env[1758]: time="2024-12-13T02:24:01.195998832Z" level=error msg="ContainerStatus for \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\": not found"
Dec 13 02:24:01.218646 kubelet[2833]: E1213 02:24:01.218606 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\": not found" containerID="ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0"
Dec 13 02:24:01.218899 kubelet[2833]: I1213 02:24:01.218881 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0"} err="failed to get container status \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef9d0771c5ce2ee4aa9e2ae38c6be4622a766cf0b8f5a933f720b90b5e9d46b0\": not found"
Dec 13 02:24:01.219059 kubelet[2833]: I1213 02:24:01.219042 2833 scope.go:117] "RemoveContainer" containerID="d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50"
Dec 13 02:24:01.222372 env[1758]: time="2024-12-13T02:24:01.221845166Z" level=info msg="RemoveContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\""
Dec 13 02:24:01.239777 env[1758]: time="2024-12-13T02:24:01.239634052Z" level=info msg="RemoveContainer for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" returns successfully"
Dec 13 02:24:01.242303 kubelet[2833]: I1213 02:24:01.242249 2833 scope.go:117] "RemoveContainer" containerID="b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40"
Dec 13 02:24:01.250703 env[1758]: time="2024-12-13T02:24:01.250569732Z" level=info msg="RemoveContainer for \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\""
Dec 13 02:24:01.257475 env[1758]: time="2024-12-13T02:24:01.257425203Z" level=info msg="RemoveContainer for \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\" returns successfully"
Dec 13 02:24:01.257820 kubelet[2833]: I1213 02:24:01.257785 2833 scope.go:117] "RemoveContainer" containerID="28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3"
Dec 13 02:24:01.266283 env[1758]: time="2024-12-13T02:24:01.266096661Z" level=info msg="RemoveContainer for \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\""
Dec 13 02:24:01.274403 env[1758]: time="2024-12-13T02:24:01.274353448Z" level=info msg="RemoveContainer for \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\" returns successfully"
Dec 13 02:24:01.277471 kubelet[2833]: I1213 02:24:01.277433 2833 scope.go:117] "RemoveContainer" containerID="fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61"
Dec 13 02:24:01.284712 env[1758]: time="2024-12-13T02:24:01.284580108Z" level=info msg="RemoveContainer for \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\""
Dec 13 02:24:01.319305 env[1758]: time="2024-12-13T02:24:01.319248923Z" level=info msg="RemoveContainer for \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\" returns successfully"
Dec 13 02:24:01.319842 kubelet[2833]: I1213 02:24:01.319792 2833 scope.go:117] "RemoveContainer" containerID="8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa"
Dec 13 02:24:01.326279 env[1758]: time="2024-12-13T02:24:01.326234056Z" level=info msg="RemoveContainer for \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\""
Dec 13 02:24:01.333798 env[1758]: time="2024-12-13T02:24:01.333749140Z" level=info msg="RemoveContainer for \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\" returns successfully"
Dec 13 02:24:01.334210 kubelet[2833]: I1213 02:24:01.334182 2833 scope.go:117] "RemoveContainer" containerID="d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50"
Dec 13 02:24:01.334822 env[1758]: time="2024-12-13T02:24:01.334564010Z" level=error msg="ContainerStatus for \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\": not found"
Dec 13 02:24:01.335001 kubelet[2833]: E1213 02:24:01.334975 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\": not found" containerID="d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50"
Dec 13 02:24:01.335269 kubelet[2833]: I1213 02:24:01.335025 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50"} err="failed to get container status \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\": rpc error: code = NotFound desc = an error occurred when try to find container \"d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50\": not found"
Dec 13 02:24:01.335269 kubelet[2833]: I1213 02:24:01.335046 2833 scope.go:117] "RemoveContainer" containerID="b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40"
Dec 13 02:24:01.338359 kubelet[2833]: E1213 02:24:01.338239 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\": not found" containerID="b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40"
Dec 13 02:24:01.338359 kubelet[2833]: I1213 02:24:01.338290 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40"} err="failed to get container status \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\": rpc error: code = NotFound desc = an error occurred when try to find container \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\": not found"
Dec 13 02:24:01.338359 kubelet[2833]: I1213 02:24:01.338306 2833 scope.go:117] "RemoveContainer" containerID="28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3"
Dec 13 02:24:01.339264 env[1758]: time="2024-12-13T02:24:01.335370891Z" level=error msg="ContainerStatus for \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b41791d5c6942f6188e017470d67334e173f207ab60fa9321b4fdd2759dfdd40\": not found"
Dec 13 02:24:01.339264 env[1758]: time="2024-12-13T02:24:01.338720701Z" level=error msg="ContainerStatus for \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\": not found"
Dec 13 02:24:01.339408 kubelet[2833]: E1213 02:24:01.338922 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\": not found" containerID="28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3"
Dec 13 02:24:01.339408 kubelet[2833]: I1213 02:24:01.338964 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3"} err="failed to get container status \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\": rpc error: code = NotFound desc = an error occurred when try to find container \"28b0cdf32e6030304103f7d24a4947aaa4c26f27c10a92357bb3449280d700a3\": not found"
Dec 13 02:24:01.339408 kubelet[2833]: I1213 02:24:01.338979 2833 scope.go:117] "RemoveContainer" containerID="fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61"
Dec 13 02:24:01.339562 env[1758]: time="2024-12-13T02:24:01.339400402Z" level=error msg="ContainerStatus for \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\": not found"
Dec 13 02:24:01.339620 kubelet[2833]: E1213 02:24:01.339599 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\": not found" containerID="fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61"
Dec 13 02:24:01.339668 kubelet[2833]: I1213 02:24:01.339637 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61"} err="failed to get container status \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\": rpc error: code = NotFound desc = an error occurred when try to find container \"fde5e98a3b163d5066517f44d8afbb6d5619434332d51e2eded82a5e8278bc61\": not found"
Dec 13 02:24:01.339668 kubelet[2833]: I1213 02:24:01.339652 2833 scope.go:117] "RemoveContainer" containerID="8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa"
Dec 13 02:24:01.341803 env[1758]: time="2024-12-13T02:24:01.339851482Z" level=error msg="ContainerStatus for \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\": not found"
Dec 13 02:24:01.342123 kubelet[2833]: E1213 02:24:01.342084 2833 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\": not found" containerID="8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa"
Dec 13 02:24:01.342236 kubelet[2833]: I1213 02:24:01.342132 2833 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa"} err="failed to get container status \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\": rpc error: code = NotFound desc = an error occurred when try to find container \"8e4368fe42d72b02a22d5139c1b55652ac7b204daf3858b50b56e2e7d92855fa\": not found"
Dec 13 02:24:01.398301 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3e2cb9509fb1b9ad8a06dbae756dd77c44d31d5cde02df24f4dadbb539eba50-rootfs.mount: Deactivated successfully.
Dec 13 02:24:01.398511 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1-rootfs.mount: Deactivated successfully.
Dec 13 02:24:01.398652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1-shm.mount: Deactivated successfully.
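[editor's aside] The RemoveContainer/ContainerStatus exchange above is the usual idempotent-cleanup pattern: after a successful delete, the follow-up status query comes back with gRPC code NotFound, and the caller treats "already gone" as success rather than as a failure. A generic Go sketch of that handling — the runtime interface and gone stub are hypothetical stand-ins, not a real containerd or CRI API; only the status-code treatment mirrors what the log shows:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// runtime is a stand-in for a CRI-style client (hypothetical).
type runtime interface {
	ContainerStatus(id string) (string, error)
}

// ensureGone verifies a container no longer exists after deletion.
func ensureGone(r runtime, id string) error {
	_, err := r.ContainerStatus(id)
	switch {
	case err == nil:
		return fmt.Errorf("container %s still present", id)
	case status.Code(err) == codes.NotFound:
		return nil // NotFound after delete means cleanup succeeded
	default:
		return err
	}
}

// gone simulates a runtime that has already removed the container.
type gone struct{}

func (gone) ContainerStatus(id string) (string, error) {
	return "", status.Error(codes.NotFound, "an error occurred when try to find container: not found")
}

func main() {
	fmt.Println(ensureGone(gone{}, "ef9d0771…")) // <nil>
}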
Dec 13 02:24:01.398796 systemd[1]: var-lib-kubelet-pods-483e5960\x2d5ffb\x2d494a\x2db026\x2d7de47696a7c0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:24:01.399535 systemd[1]: var-lib-kubelet-pods-e3010c74\x2dddb2\x2d4a3e\x2db491\x2d43e90efb9c1d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgl8zk.mount: Deactivated successfully.
Dec 13 02:24:01.399739 systemd[1]: var-lib-kubelet-pods-483e5960\x2d5ffb\x2d494a\x2db026\x2d7de47696a7c0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcj22h.mount: Deactivated successfully.
Dec 13 02:24:01.400620 systemd[1]: var-lib-kubelet-pods-483e5960\x2d5ffb\x2d494a\x2db026\x2d7de47696a7c0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:24:02.213280 sshd[4434]: pam_unix(sshd:session): session closed for user core
Dec 13 02:24:02.266421 systemd[1]: sshd@21-172.31.30.169:22-139.178.68.195:58910.service: Deactivated successfully.
Dec 13 02:24:02.301419 systemd[1]: Started sshd@22-172.31.30.169:22-139.178.68.195:56720.service.
Dec 13 02:24:02.302791 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 02:24:02.311060 systemd-logind[1749]: Session 22 logged out. Waiting for processes to exit.
Dec 13 02:24:02.314149 systemd-logind[1749]: Removed session 22.
Dec 13 02:24:02.532798 sshd[4606]: Accepted publickey for core from 139.178.68.195 port 56720 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:24:02.540999 sshd[4606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:24:02.541985 kubelet[2833]: I1213 02:24:02.541866 2833 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" path="/var/lib/kubelet/pods/483e5960-5ffb-494a-b026-7de47696a7c0/volumes"
Dec 13 02:24:02.549204 kubelet[2833]: I1213 02:24:02.548589 2833 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="e3010c74-ddb2-4a3e-b491-43e90efb9c1d" path="/var/lib/kubelet/pods/e3010c74-ddb2-4a3e-b491-43e90efb9c1d/volumes"
Dec 13 02:24:02.578940 systemd-logind[1749]: New session 23 of user core.
Dec 13 02:24:02.579843 systemd[1]: Started session-23.scope.
Dec 13 02:24:03.709315 sshd[4606]: pam_unix(sshd:session): session closed for user core
Dec 13 02:24:03.715599 kubelet[2833]: I1213 02:24:03.712597 2833 topology_manager.go:215] "Topology Admit Handler" podUID="291276cd-35d2-4b77-a696-a976076d9880" podNamespace="kube-system" podName="cilium-zj92k"
Dec 13 02:24:03.713978 systemd[1]: sshd@22-172.31.30.169:22-139.178.68.195:56720.service: Deactivated successfully.
Dec 13 02:24:03.715083 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 02:24:03.717966 systemd-logind[1749]: Session 23 logged out. Waiting for processes to exit.
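[editor's aside] The mount-unit names above use systemd's path escaping: "/" separators become "-", and bytes outside [A-Za-z0-9:_.] are rewritten as \xXX, which is why the kubelet volume paths show \x2d for "-" and \x7e for "~". A small Go sketch of that encoding, equivalent in spirit to `systemd-escape --path`; it is a reimplementation for illustration (and skips the special case for a leading "."), not the systemd source:

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics systemd path escaping: trim slashes, turn the remaining
// "/" separators into "-", and hex-escape every other byte outside
// [A-Za-z0-9:_.] as \xXX (so "-" itself becomes \x2d, "~" becomes \x7e).
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/483e5960-5ffb-494a-b026-7de47696a7c0/volumes/kubernetes.io~secret/clustermesh-secrets"
	fmt.Println(escapePath(p) + ".mount")
	// Prints the exact unit name seen in the first entry above.
}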
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.718956 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="apply-sysctl-overwrites"
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.719009 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="mount-bpf-fs"
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.719028 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="e3010c74-ddb2-4a3e-b491-43e90efb9c1d" containerName="cilium-operator"
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.719038 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="mount-cgroup"
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.719048 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="clean-cilium-state"
Dec 13 02:24:03.719225 kubelet[2833]: E1213 02:24:03.719058 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="cilium-agent"
Dec 13 02:24:03.720936 systemd-logind[1749]: Removed session 23.
Dec 13 02:24:03.721978 kubelet[2833]: I1213 02:24:03.721945 2833 memory_manager.go:354] "RemoveStaleState removing state" podUID="e3010c74-ddb2-4a3e-b491-43e90efb9c1d" containerName="cilium-operator"
Dec 13 02:24:03.722086 kubelet[2833]: I1213 02:24:03.721993 2833 memory_manager.go:354] "RemoveStaleState removing state" podUID="483e5960-5ffb-494a-b026-7de47696a7c0" containerName="cilium-agent"
Dec 13 02:24:03.736836 systemd[1]: Started sshd@23-172.31.30.169:22-139.178.68.195:56732.service.
Dec 13 02:24:03.813240 kubelet[2833]: E1213 02:24:03.813206 2833 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:24:03.885768 kubelet[2833]: I1213 02:24:03.885732 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-net\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885788 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-etc-cni-netd\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885817 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-cilium-ipsec-secrets\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885842 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-bpf-maps\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885867 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-lib-modules\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885922 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-kernel\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.885985 kubelet[2833]: I1213 02:24:03.885952 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-hostproc\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.885981 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-clustermesh-secrets\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.886008 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-run\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.886042 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cni-path\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.886071 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-xtables-lock\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.886101 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-cgroup\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886287 kubelet[2833]: I1213 02:24:03.886130 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/291276cd-35d2-4b77-a696-a976076d9880-cilium-config-path\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886529 kubelet[2833]: I1213 02:24:03.886160 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-hubble-tls\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.886529 kubelet[2833]: I1213 02:24:03.886195 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rhvpz\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-kube-api-access-rhvpz\") pod \"cilium-zj92k\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") " pod="kube-system/cilium-zj92k"
Dec 13 02:24:03.920737 sshd[4618]: Accepted publickey for core from 139.178.68.195 port 56732 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:24:03.922586 sshd[4618]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:24:03.936356 systemd[1]: Started session-24.scope.
Dec 13 02:24:03.936682 systemd-logind[1749]: New session 24 of user core.
Dec 13 02:24:04.247797 sshd[4618]: pam_unix(sshd:session): session closed for user core
Dec 13 02:24:04.251423 systemd[1]: sshd@23-172.31.30.169:22-139.178.68.195:56732.service: Deactivated successfully.
Dec 13 02:24:04.253568 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 02:24:04.254604 systemd-logind[1749]: Session 24 logged out. Waiting for processes to exit.
Dec 13 02:24:04.257208 systemd-logind[1749]: Removed session 24.
Dec 13 02:24:04.271889 systemd[1]: Started sshd@24-172.31.30.169:22-139.178.68.195:56740.service.
Dec 13 02:24:04.338838 env[1758]: time="2024-12-13T02:24:04.338256725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zj92k,Uid:291276cd-35d2-4b77-a696-a976076d9880,Namespace:kube-system,Attempt:0,}"
Dec 13 02:24:04.378002 env[1758]: time="2024-12-13T02:24:04.377868652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:24:04.378214 env[1758]: time="2024-12-13T02:24:04.377955973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:24:04.378214 env[1758]: time="2024-12-13T02:24:04.377987673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:24:04.378356 env[1758]: time="2024-12-13T02:24:04.378225936Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d pid=4645 runtime=io.containerd.runc.v2
Dec 13 02:24:04.433204 env[1758]: time="2024-12-13T02:24:04.433160146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zj92k,Uid:291276cd-35d2-4b77-a696-a976076d9880,Namespace:kube-system,Attempt:0,} returns sandbox id \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\""
Dec 13 02:24:04.438355 env[1758]: time="2024-12-13T02:24:04.438321720Z" level=info msg="CreateContainer within sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:24:04.444977 sshd[4635]: Accepted publickey for core from 139.178.68.195 port 56740 ssh2: RSA SHA256:4KbtXXAWDYYJteZbJp3ZMRrg6Zfz5h3Ah6Q/YaIH9xY
Dec 13 02:24:04.445719 sshd[4635]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 02:24:04.451278 systemd-logind[1749]: New session 25 of user core.
Dec 13 02:24:04.452341 systemd[1]: Started session-25.scope.
Dec 13 02:24:04.472535 env[1758]: time="2024-12-13T02:24:04.472489256Z" level=info msg="CreateContainer within sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\""
Dec 13 02:24:04.474929 env[1758]: time="2024-12-13T02:24:04.474885886Z" level=info msg="StartContainer for \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\""
Dec 13 02:24:04.552139 env[1758]: time="2024-12-13T02:24:04.548811338Z" level=info msg="StartContainer for \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\" returns successfully"
Dec 13 02:24:04.688528 env[1758]: time="2024-12-13T02:24:04.688480067Z" level=info msg="shim disconnected" id=d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e
Dec 13 02:24:04.688528 env[1758]: time="2024-12-13T02:24:04.688527043Z" level=warning msg="cleaning up after shim disconnected" id=d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e namespace=k8s.io
Dec 13 02:24:04.688836 env[1758]: time="2024-12-13T02:24:04.688604407Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:04.697423 env[1758]: time="2024-12-13T02:24:04.697375796Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4733 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:05.251174 env[1758]: time="2024-12-13T02:24:05.250355186Z" level=info msg="StopPodSandbox for \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\""
Dec 13 02:24:05.251174 env[1758]: time="2024-12-13T02:24:05.250433236Z" level=info msg="Container to stop \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 02:24:05.257696 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d-shm.mount: Deactivated successfully.
Dec 13 02:24:05.328320 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d-rootfs.mount: Deactivated successfully.
Dec 13 02:24:05.348329 env[1758]: time="2024-12-13T02:24:05.348274921Z" level=info msg="shim disconnected" id=1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d
Dec 13 02:24:05.349853 env[1758]: time="2024-12-13T02:24:05.348810620Z" level=warning msg="cleaning up after shim disconnected" id=1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d namespace=k8s.io
Dec 13 02:24:05.349853 env[1758]: time="2024-12-13T02:24:05.348839096Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:05.360796 env[1758]: time="2024-12-13T02:24:05.360747317Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4767 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:05.361116 env[1758]: time="2024-12-13T02:24:05.361082862Z" level=info msg="TearDown network for sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" successfully"
Dec 13 02:24:05.361208 env[1758]: time="2024-12-13T02:24:05.361114020Z" level=info msg="StopPodSandbox for \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" returns successfully"
Dec 13 02:24:05.499029 kubelet[2833]: I1213 02:24:05.498991 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-cilium-ipsec-secrets\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499029 kubelet[2833]: I1213 02:24:05.499041 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-etc-cni-netd\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499077 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-clustermesh-secrets\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499101 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-xtables-lock\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499124 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-cgroup\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499149 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-net\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499180 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-lib-modules\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.499656 kubelet[2833]: I1213 02:24:05.499208 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-run\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499236 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-bpf-maps\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499264 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-kernel\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499294 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-hubble-tls\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499327 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rhvpz\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-kube-api-access-rhvpz\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499353 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-hostproc\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500043 kubelet[2833]: I1213 02:24:05.499379 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cni-path\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500580 kubelet[2833]: I1213 02:24:05.499408 2833 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/291276cd-35d2-4b77-a696-a976076d9880-cilium-config-path\") pod \"291276cd-35d2-4b77-a696-a976076d9880\" (UID: \"291276cd-35d2-4b77-a696-a976076d9880\") "
Dec 13 02:24:05.500580 kubelet[2833]: I1213 02:24:05.499590 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.503050 kubelet[2833]: I1213 02:24:05.501677 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.503050 kubelet[2833]: I1213 02:24:05.501741 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.503050 kubelet[2833]: I1213 02:24:05.501765 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.503700 kubelet[2833]: I1213 02:24:05.503667 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.504806 kubelet[2833]: I1213 02:24:05.504777 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.504908 kubelet[2833]: I1213 02:24:05.504826 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.504908 kubelet[2833]: I1213 02:24:05.504851 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.506641 kubelet[2833]: I1213 02:24:05.506596 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/291276cd-35d2-4b77-a696-a976076d9880-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 02:24:05.507362 kubelet[2833]: I1213 02:24:05.507335 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-hostproc" (OuterVolumeSpecName: "hostproc") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.507461 kubelet[2833]: I1213 02:24:05.507381 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cni-path" (OuterVolumeSpecName: "cni-path") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 02:24:05.520380 systemd[1]: var-lib-kubelet-pods-291276cd\x2d35d2\x2d4b77\x2da696\x2da976076d9880-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:24:05.542957 kubelet[2833]: I1213 02:24:05.541490 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-kube-api-access-rhvpz" (OuterVolumeSpecName: "kube-api-access-rhvpz") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "kube-api-access-rhvpz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:05.542957 kubelet[2833]: I1213 02:24:05.541641 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:24:05.542957 kubelet[2833]: I1213 02:24:05.541705 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 02:24:05.546818 systemd[1]: var-lib-kubelet-pods-291276cd\x2d35d2\x2d4b77\x2da696\x2da976076d9880-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 02:24:05.547023 systemd[1]: var-lib-kubelet-pods-291276cd\x2d35d2\x2d4b77\x2da696\x2da976076d9880-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Dec 13 02:24:05.547246 kubelet[2833]: I1213 02:24:05.547016 2833 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "291276cd-35d2-4b77-a696-a976076d9880" (UID: "291276cd-35d2-4b77-a696-a976076d9880"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 02:24:05.600078 kubelet[2833]: I1213 02:24:05.600044 2833 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-etc-cni-netd\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.600078 kubelet[2833]: I1213 02:24:05.600081 2833 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-clustermesh-secrets\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600096 2833 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-xtables-lock\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600110 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-cgroup\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600130 2833 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-net\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600215 2833 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-lib-modules\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600231 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cilium-run\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600243 2833 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-bpf-maps\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600257 2833 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-host-proc-sys-kernel\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.601159 kubelet[2833]: I1213 02:24:05.600270 2833 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-hubble-tls\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.605825 kubelet[2833]: I1213 02:24:05.600331 2833 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rhvpz\" (UniqueName: \"kubernetes.io/projected/291276cd-35d2-4b77-a696-a976076d9880-kube-api-access-rhvpz\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.605825 kubelet[2833]: I1213 02:24:05.600348 2833 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-hostproc\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.605825 kubelet[2833]: I1213 02:24:05.600364 2833 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/291276cd-35d2-4b77-a696-a976076d9880-cni-path\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.605825 kubelet[2833]: I1213 02:24:05.600378 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/291276cd-35d2-4b77-a696-a976076d9880-cilium-config-path\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.605825 kubelet[2833]: I1213 02:24:05.600391 2833 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/291276cd-35d2-4b77-a696-a976076d9880-cilium-ipsec-secrets\") on node \"ip-172-31-30-169\" DevicePath \"\""
Dec 13 02:24:05.996365 systemd[1]: var-lib-kubelet-pods-291276cd\x2d35d2\x2d4b77\x2da696\x2da976076d9880-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drhvpz.mount: Deactivated successfully.
Dec 13 02:24:06.261223 kubelet[2833]: I1213 02:24:06.261110 2833 scope.go:117] "RemoveContainer" containerID="d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e"
Dec 13 02:24:06.266233 env[1758]: time="2024-12-13T02:24:06.266163488Z" level=info msg="RemoveContainer for \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\""
Dec 13 02:24:06.273040 env[1758]: time="2024-12-13T02:24:06.272909998Z" level=info msg="RemoveContainer for \"d8cdefa6d60ecef6609835e2165bcb9b0c51de2abef1f57eaf8819bb7d6eeb9e\" returns successfully"
Dec 13 02:24:06.318428 kubelet[2833]: I1213 02:24:06.318388 2833 topology_manager.go:215] "Topology Admit Handler" podUID="49b4f316-d79b-4bf6-ba03-41758f2bf551" podNamespace="kube-system" podName="cilium-6zh76"
Dec 13 02:24:06.318643 kubelet[2833]: E1213 02:24:06.318459 2833 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="291276cd-35d2-4b77-a696-a976076d9880" containerName="mount-cgroup"
Dec 13 02:24:06.318643 kubelet[2833]: I1213 02:24:06.318488 2833 memory_manager.go:354] "RemoveStaleState removing state" podUID="291276cd-35d2-4b77-a696-a976076d9880" containerName="mount-cgroup"
Dec 13 02:24:06.409832 kubelet[2833]: I1213 02:24:06.409785 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-hostproc\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.409843 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-lib-modules\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.409872 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/49b4f316-d79b-4bf6-ba03-41758f2bf551-clustermesh-secrets\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.409921 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/49b4f316-d79b-4bf6-ba03-41758f2bf551-hubble-tls\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.409952 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-xtables-lock\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.409978 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-etc-cni-netd\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410035 kubelet[2833]: I1213 02:24:06.410004 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-cilium-run\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410420 kubelet[2833]: I1213 02:24:06.410029 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-bpf-maps\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410420 kubelet[2833]: I1213 02:24:06.410057 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-host-proc-sys-kernel\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410420 kubelet[2833]: I1213 02:24:06.410088 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/49b4f316-d79b-4bf6-ba03-41758f2bf551-cilium-config-path\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410420 kubelet[2833]: I1213 02:24:06.410118 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/49b4f316-d79b-4bf6-ba03-41758f2bf551-cilium-ipsec-secrets\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410420 kubelet[2833]: I1213 02:24:06.410148 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-host-proc-sys-net\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410674 kubelet[2833]: I1213 02:24:06.410184 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-cilium-cgroup\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410674 kubelet[2833]: I1213 02:24:06.410288 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwdr\" (UniqueName: \"kubernetes.io/projected/49b4f316-d79b-4bf6-ba03-41758f2bf551-kube-api-access-6cwdr\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.410674 kubelet[2833]: I1213 02:24:06.410328 2833 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/49b4f316-d79b-4bf6-ba03-41758f2bf551-cni-path\") pod \"cilium-6zh76\" (UID: \"49b4f316-d79b-4bf6-ba03-41758f2bf551\") " pod="kube-system/cilium-6zh76"
Dec 13 02:24:06.547227 kubelet[2833]: I1213 02:24:06.547202 2833 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="291276cd-35d2-4b77-a696-a976076d9880" path="/var/lib/kubelet/pods/291276cd-35d2-4b77-a696-a976076d9880/volumes"
Dec 13 02:24:06.627011 env[1758]: time="2024-12-13T02:24:06.626959489Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zh76,Uid:49b4f316-d79b-4bf6-ba03-41758f2bf551,Namespace:kube-system,Attempt:0,}"
Dec 13 02:24:06.656564 env[1758]: time="2024-12-13T02:24:06.656464449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 02:24:06.656564 env[1758]: time="2024-12-13T02:24:06.656507662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 02:24:06.656564 env[1758]: time="2024-12-13T02:24:06.656524295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 02:24:06.657106 env[1758]: time="2024-12-13T02:24:06.657033808Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543 pid=4798 runtime=io.containerd.runc.v2
Dec 13 02:24:06.724522 env[1758]: time="2024-12-13T02:24:06.724475426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6zh76,Uid:49b4f316-d79b-4bf6-ba03-41758f2bf551,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\""
Dec 13 02:24:06.727882 env[1758]: time="2024-12-13T02:24:06.727829817Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 02:24:06.748255 env[1758]: time="2024-12-13T02:24:06.748125648Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a0808bd545230ebe7da60f756b21be4a2426223b79de9d50d4920eb31aa7d4d\""
Dec 13 02:24:06.751075 env[1758]: time="2024-12-13T02:24:06.749018959Z" level=info msg="StartContainer for \"1a0808bd545230ebe7da60f756b21be4a2426223b79de9d50d4920eb31aa7d4d\""
Dec 13 02:24:06.817990 env[1758]: time="2024-12-13T02:24:06.814653830Z" level=info msg="StartContainer for \"1a0808bd545230ebe7da60f756b21be4a2426223b79de9d50d4920eb31aa7d4d\" returns successfully"
Dec 13 02:24:06.868512 env[1758]: time="2024-12-13T02:24:06.868467231Z" level=info msg="shim disconnected" id=1a0808bd545230ebe7da60f756b21be4a2426223b79de9d50d4920eb31aa7d4d
Dec 13 02:24:06.868512 env[1758]: time="2024-12-13T02:24:06.868514167Z" level=warning msg="cleaning up after shim disconnected" id=1a0808bd545230ebe7da60f756b21be4a2426223b79de9d50d4920eb31aa7d4d namespace=k8s.io
Dec 13 02:24:06.868827 env[1758]: time="2024-12-13T02:24:06.868525935Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:06.879704 env[1758]: time="2024-12-13T02:24:06.879659582Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4884 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:07.268932 env[1758]: time="2024-12-13T02:24:07.268811565Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:24:07.302134 env[1758]: time="2024-12-13T02:24:07.302088706Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7\""
Dec 13 02:24:07.304414 env[1758]: time="2024-12-13T02:24:07.303004923Z" level=info msg="StartContainer for \"4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7\""
Dec 13 02:24:07.379580 env[1758]: time="2024-12-13T02:24:07.377585926Z" level=info msg="StartContainer for \"4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7\" returns successfully"
Dec 13 02:24:07.428848 env[1758]: time="2024-12-13T02:24:07.428736914Z" level=info msg="shim disconnected" id=4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7
Dec 13 02:24:07.429094 env[1758]: time="2024-12-13T02:24:07.428852370Z" level=warning msg="cleaning up after shim disconnected" id=4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7 namespace=k8s.io
Dec 13 02:24:07.429094 env[1758]: time="2024-12-13T02:24:07.428866042Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:07.438805 env[1758]: time="2024-12-13T02:24:07.438758317Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4949 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:07.998237 systemd[1]: run-containerd-runc-k8s.io-4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7-runc.YQqiiY.mount: Deactivated successfully.
Dec 13 02:24:07.998443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d3103a68a47aa275f95f2a9eae9cae00427268496d9c5b3937f1e84bb60b0a7-rootfs.mount: Deactivated successfully.
Dec 13 02:24:08.301102 env[1758]: time="2024-12-13T02:24:08.301046167Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:24:08.337416 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617570248.mount: Deactivated successfully.
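The mount-unit names systemd deactivates throughout this stretch (var-lib-kubelet-pods-291276cd\x2d..., ...kubernetes.io\x7esecret-..., var-lib-containerd-tmpmounts-containerd\x2dmount3617570248.mount) are produced by systemd's path escaping: slashes become dashes, and any other byte outside [A-Za-z0-9:_.] is hex-escaped, so '-' becomes \x2d and '~' becomes \x7e. A simplified Go sketch of that transformation, assuming the core rules of systemd-escape --path; the authoritative implementation is systemd's unit_name_from_path(), which additionally escapes a leading dot:

package main

import (
	"fmt"
	"strings"
)

// escapePath mimics the core of `systemd-escape --path`: trim the outer
// slashes, turn each remaining '/' into '-', and hex-escape every byte
// that is not alphanumeric or one of ':', '_', '.'.
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // e.g. '-' -> \x2d, '~' -> \x7e
		}
	}
	return b.String()
}

func main() {
	p := "/var/lib/kubelet/pods/291276cd-35d2-4b77-a696-a976076d9880/volumes/kubernetes.io~secret/clustermesh-secrets"
	fmt.Println(escapePath(p) + ".mount")
}

Running this on the clustermesh-secrets volume path reproduces the unit name deactivated at 02:24:05.520380 above.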
Dec 13 02:24:08.352610 env[1758]: time="2024-12-13T02:24:08.352495491Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20\""
Dec 13 02:24:08.353260 env[1758]: time="2024-12-13T02:24:08.353224618Z" level=info msg="StartContainer for \"c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20\""
Dec 13 02:24:08.491093 env[1758]: time="2024-12-13T02:24:08.491057761Z" level=info msg="StopPodSandbox for \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\""
Dec 13 02:24:08.491369 env[1758]: time="2024-12-13T02:24:08.491323163Z" level=info msg="TearDown network for sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" successfully"
Dec 13 02:24:08.491457 env[1758]: time="2024-12-13T02:24:08.491443118Z" level=info msg="StopPodSandbox for \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" returns successfully"
Dec 13 02:24:08.493723 env[1758]: time="2024-12-13T02:24:08.493524154Z" level=info msg="RemovePodSandbox for \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\""
Dec 13 02:24:08.494173 env[1758]: time="2024-12-13T02:24:08.494072371Z" level=info msg="Forcibly stopping sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\""
Dec 13 02:24:08.494289 env[1758]: time="2024-12-13T02:24:08.494200134Z" level=info msg="TearDown network for sandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" successfully"
Dec 13 02:24:08.506755 env[1758]: time="2024-12-13T02:24:08.506709180Z" level=info msg="RemovePodSandbox \"135b0bfcdaade3685201acfbd1e65fcd5757d15c68d410795f50b9235dca7c0b\" returns successfully"
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.507638615Z" level=info msg="StopPodSandbox for \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\""
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.507742885Z" level=info msg="TearDown network for sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" successfully"
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.507776735Z" level=info msg="StopPodSandbox for \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" returns successfully"
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.508240258Z" level=info msg="RemovePodSandbox for \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\""
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.508262442Z" level=info msg="Forcibly stopping sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\""
Dec 13 02:24:08.508951 env[1758]: time="2024-12-13T02:24:08.508324033Z" level=info msg="TearDown network for sandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" successfully"
Dec 13 02:24:08.510525 env[1758]: time="2024-12-13T02:24:08.510491688Z" level=info msg="StartContainer for \"c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20\" returns successfully"
Dec 13 02:24:08.528379 env[1758]: time="2024-12-13T02:24:08.524868940Z" level=info msg="RemovePodSandbox \"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d\" returns successfully"
Dec 13 02:24:08.529306 env[1758]: time="2024-12-13T02:24:08.528959774Z" level=info msg="StopPodSandbox for \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\""
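The interleaved StopPodSandbox/RemovePodSandbox entries above are the kubelet garbage-collecting the dead sandboxes of replaced pods, including the cilium-zj92k sandbox torn down earlier; "Forcibly stopping sandbox" appears on the removal path even when the sandbox has already exited. A sketch of the same two CRI calls, with the sandbox id taken from the log and the dial boilerplate repeating the earlier sketch; this illustrates the call pair, not the kubelet's actual GC loop:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// removeSandbox mirrors the StopPodSandbox -> RemovePodSandbox pair in the
// log: stop is idempotent on an already-exited sandbox, remove deletes its
// remaining resources.
func removeSandbox(ctx context.Context, rt cri.RuntimeServiceClient, id string) error {
	if _, err := rt.StopPodSandbox(ctx, &cri.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		return err
	}
	_, err := rt.RemovePodSandbox(ctx, &cri.RemovePodSandboxRequest{PodSandboxId: id})
	return err
}

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Sandbox id of the removed cilium-zj92k pod, taken from the log above.
	if err := removeSandbox(ctx, cri.NewRuntimeServiceClient(conn),
		"1b5925b1d6de40dbcf22d6b4c21cdebce48b3c96f0b666db5ba9586644a83b2d"); err != nil {
		log.Fatal(err)
	}
}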
Dec 13 02:24:08.530070 env[1758]: time="2024-12-13T02:24:08.529361404Z" level=info msg="TearDown network for sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" successfully"
Dec 13 02:24:08.530415 env[1758]: time="2024-12-13T02:24:08.530073284Z" level=info msg="StopPodSandbox for \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" returns successfully"
Dec 13 02:24:08.547137 env[1758]: time="2024-12-13T02:24:08.547090492Z" level=info msg="RemovePodSandbox for \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\""
Dec 13 02:24:08.547432 env[1758]: time="2024-12-13T02:24:08.547351761Z" level=info msg="Forcibly stopping sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\""
Dec 13 02:24:08.547631 env[1758]: time="2024-12-13T02:24:08.547607170Z" level=info msg="TearDown network for sandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" successfully"
Dec 13 02:24:08.555643 env[1758]: time="2024-12-13T02:24:08.554890749Z" level=info msg="RemovePodSandbox \"3f09a3241495d1131f018e5ff3e804d012c2b431d861b67e3dae3f667d766af1\" returns successfully"
Dec 13 02:24:08.639907 env[1758]: time="2024-12-13T02:24:08.639809930Z" level=info msg="shim disconnected" id=c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20
Dec 13 02:24:08.639907 env[1758]: time="2024-12-13T02:24:08.639886085Z" level=warning msg="cleaning up after shim disconnected" id=c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20 namespace=k8s.io
Dec 13 02:24:08.639907 env[1758]: time="2024-12-13T02:24:08.639902177Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:08.652443 env[1758]: time="2024-12-13T02:24:08.652385326Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5008 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:08.814891 kubelet[2833]: E1213 02:24:08.814751 2833 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 02:24:09.002107 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c445ba960133adb9a5afb0e8e0f5d1799ad56c7d7c455cfa47c452788b4a8b20-rootfs.mount: Deactivated successfully.
Dec 13 02:24:09.279260 env[1758]: time="2024-12-13T02:24:09.278852021Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 02:24:09.306556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810128270.mount: Deactivated successfully.
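The kubelet error in this stretch ("Container runtime network not ready ... cni plugin not initialized") reflects the NetworkReady condition that the runtime reports through the CRI Status call; it stays false until a CNI configuration appears, which on this node happens only once the cilium-agent container started below is up. A sketch of querying that condition with the same client setup as before, offered as an illustration of the API rather than the kubelet's exact code path:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Status returns the RuntimeReady and NetworkReady conditions; the kubelet
	// logs "Container runtime network not ready" while NetworkReady is false.
	resp, err := rt.Status(ctx, &cri.StatusRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, cond := range resp.Status.Conditions {
		fmt.Printf("%s=%v reason=%s message=%q\n",
			cond.Type, cond.Status, cond.Reason, cond.Message)
	}
}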
Dec 13 02:24:09.321500 env[1758]: time="2024-12-13T02:24:09.321450223Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120\""
Dec 13 02:24:09.323836 env[1758]: time="2024-12-13T02:24:09.323802331Z" level=info msg="StartContainer for \"7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120\""
Dec 13 02:24:09.406720 env[1758]: time="2024-12-13T02:24:09.406618509Z" level=info msg="StartContainer for \"7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120\" returns successfully"
Dec 13 02:24:09.456660 env[1758]: time="2024-12-13T02:24:09.456600863Z" level=info msg="shim disconnected" id=7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120
Dec 13 02:24:09.456660 env[1758]: time="2024-12-13T02:24:09.456650040Z" level=warning msg="cleaning up after shim disconnected" id=7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120 namespace=k8s.io
Dec 13 02:24:09.456660 env[1758]: time="2024-12-13T02:24:09.456663161Z" level=info msg="cleaning up dead shim"
Dec 13 02:24:09.466952 env[1758]: time="2024-12-13T02:24:09.466907235Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:24:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5066 runtime=io.containerd.runc.v2\n"
Dec 13 02:24:10.000149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7a329fa0eb2cb04b6c878e5a900bd8fd6b65337b17e8838973e320635dadf120-rootfs.mount: Deactivated successfully.
Dec 13 02:24:10.289981 env[1758]: time="2024-12-13T02:24:10.289768989Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 02:24:10.340525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2226322807.mount: Deactivated successfully.
Dec 13 02:24:10.372537 env[1758]: time="2024-12-13T02:24:10.372436691Z" level=info msg="CreateContainer within sandbox \"a8a58d71e9d3862818e14946e886c3f2ec0ef90aae6dbfc4b528a4465c051543\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f\""
Dec 13 02:24:10.376009 env[1758]: time="2024-12-13T02:24:10.375100075Z" level=info msg="StartContainer for \"4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f\""
Dec 13 02:24:10.469960 env[1758]: time="2024-12-13T02:24:10.469867178Z" level=info msg="StartContainer for \"4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f\" returns successfully"
Dec 13 02:24:10.799712 kubelet[2833]: I1213 02:24:10.799408 2833 setters.go:568] "Node became not ready" node="ip-172-31-30-169" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:24:10Z","lastTransitionTime":"2024-12-13T02:24:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:24:11.324082 kubelet[2833]: I1213 02:24:11.324044 2833 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6zh76" podStartSLOduration=5.3239893 podStartE2EDuration="5.3239893s" podCreationTimestamp="2024-12-13 02:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:24:11.321074364 +0000 UTC m=+123.151048894" watchObservedRunningTime="2024-12-13 02:24:11.3239893 +0000 UTC m=+123.153963828"
Dec 13 02:24:11.519570 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:24:15.109061 systemd-networkd[1439]: lxc_health: Link UP
Dec 13 02:24:15.125698 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:24:15.125994 systemd-networkd[1439]: lxc_health: Gained carrier
Dec 13 02:24:15.140519 (udev-worker)[5633]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 02:24:16.544786 systemd-networkd[1439]: lxc_health: Gained IPv6LL
Dec 13 02:24:17.847278 systemd[1]: run-containerd-runc-k8s.io-4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f-runc.AIsIEP.mount: Deactivated successfully.
Dec 13 02:24:20.130092 systemd[1]: run-containerd-runc-k8s.io-4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f-runc.8PzX93.mount: Deactivated successfully.
Dec 13 02:24:22.502777 systemd[1]: run-containerd-runc-k8s.io-4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f-runc.o9T89r.mount: Deactivated successfully.
Dec 13 02:24:22.705152 kubelet[2833]: E1213 02:24:22.705064 2833 upgradeaware.go:425] Error proxying data from client to backend: readfrom tcp 127.0.0.1:54358->127.0.0.1:46197: write tcp 127.0.0.1:54358->127.0.0.1:46197: write: broken pipe
Dec 13 02:24:22.729767 sshd[4635]: pam_unix(sshd:session): session closed for user core
Dec 13 02:24:22.736145 systemd[1]: sshd@24-172.31.30.169:22-139.178.68.195:56740.service: Deactivated successfully.
Dec 13 02:24:22.738151 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 02:24:22.739279 systemd-logind[1749]: Session 25 logged out. Waiting for processes to exit.
Dec 13 02:24:22.741951 systemd-logind[1749]: Removed session 25.
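Every entry in this journal has the same two-part shape: a syslog-style prefix (timestamp, unit name, pid), followed by either free text or, for the containerd entries, a logfmt payload (time=... level=... msg=...). A small Go sketch for splitting one entry back into those fields when post-processing a dump like this; both regular expressions are assumptions about this particular capture's layout, not a journald API:

package main

import (
	"fmt"
	"regexp"
)

// entryRe captures the journal prefix ("Dec 13 02:24:10.469960 env[1758]:")
// and the remaining payload of one entry from this dump.
var entryRe = regexp.MustCompile(`^(\w{3} \d{2} [\d:.]+) (\S+?)\[(\d+)\]: (.*)$`)

// levelRe pulls level=... and the quoted msg="..." out of containerd's
// logfmt payload, allowing backslash-escaped quotes inside the message.
var levelRe = regexp.MustCompile(`level=(\w+) msg="((?:[^"\\]|\\.)*)"`)

func main() {
	line := `Dec 13 02:24:10.469960 env[1758]: time="2024-12-13T02:24:10.469867178Z" level=info msg="StartContainer for \"4d8632f147208c2d7f5c3f9c61bdeb7dd9f684de49fec68a62bc64000240499f\" returns successfully"`
	if m := entryRe.FindStringSubmatch(line); m != nil {
		fmt.Printf("ts=%s unit=%s pid=%s\n", m[1], m[2], m[3])
		if lm := levelRe.FindStringSubmatch(m[4]); lm != nil {
			fmt.Printf("level=%s msg=%s\n", lm[1], lm[2])
		}
	}
}

On the StartContainer entry from 02:24:10.469960 above, this prints the prefix fields and then the level and the still-escaped message text.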