Dec 13 14:32:20.037982 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Dec 13 12:55:10 -00 2024
Dec 13 14:32:20.038003 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:32:20.038013 kernel: BIOS-provided physical RAM map:
Dec 13 14:32:20.038019 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 14:32:20.038025 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 14:32:20.038031 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 14:32:20.038040 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007d9e9fff] usable
Dec 13 14:32:20.038046 kernel: BIOS-e820: [mem 0x000000007d9ea000-0x000000007fffffff] reserved
Dec 13 14:32:20.038053 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000e03fffff] reserved
Dec 13 14:32:20.038059 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 14:32:20.038065 kernel: NX (Execute Disable) protection: active
Dec 13 14:32:20.038071 kernel: SMBIOS 2.7 present.
Dec 13 14:32:20.038077 kernel: DMI: Amazon EC2 t3.small/, BIOS 1.0 10/16/2017
Dec 13 14:32:20.038084 kernel: Hypervisor detected: KVM
Dec 13 14:32:20.038094 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 14:32:20.038101 kernel: kvm-clock: cpu 0, msr d19a001, primary cpu clock
Dec 13 14:32:20.038108 kernel: kvm-clock: using sched offset of 7021933381 cycles
Dec 13 14:32:20.038115 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 14:32:20.038255 kernel: tsc: Detected 2499.996 MHz processor
Dec 13 14:32:20.038267 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 14:32:20.038311 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 14:32:20.038320 kernel: last_pfn = 0x7d9ea max_arch_pfn = 0x400000000
Dec 13 14:32:20.038327 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 14:32:20.038334 kernel: Using GB pages for direct mapping
Dec 13 14:32:20.038341 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:32:20.038348 kernel: ACPI: RSDP 0x00000000000F8F40 000014 (v00 AMAZON)
Dec 13 14:32:20.038355 kernel: ACPI: RSDT 0x000000007D9EE350 000044 (v01 AMAZON AMZNRSDT 00000001 AMZN 00000001)
Dec 13 14:32:20.038362 kernel: ACPI: FACP 0x000000007D9EFF80 000074 (v01 AMAZON AMZNFACP 00000001 AMZN 00000001)
Dec 13 14:32:20.038369 kernel: ACPI: DSDT 0x000000007D9EE3A0 0010E9 (v01 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Dec 13 14:32:20.038378 kernel: ACPI: FACS 0x000000007D9EFF40 000040
Dec 13 14:32:20.038385 kernel: ACPI: SSDT 0x000000007D9EF6C0 00087A (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:32:20.038392 kernel: ACPI: APIC 0x000000007D9EF5D0 000076 (v01 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Dec 13 14:32:20.038398 kernel: ACPI: SRAT 0x000000007D9EF530 0000A0 (v01 AMAZON AMZNSRAT 00000001 AMZN 00000001)
Dec 13 14:32:20.038405 kernel: ACPI: SLIT 0x000000007D9EF4C0 00006C (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Dec 13 14:32:20.038412 kernel: ACPI: WAET 0x000000007D9EF490 000028 (v01 AMAZON AMZNWAET 00000001 AMZN 00000001)
Dec 13 14:32:20.038419 kernel: ACPI: HPET 0x00000000000C9000 000038 (v01 AMAZON AMZNHPET 00000001 AMZN 00000001)
Dec 13 14:32:20.038425 kernel: ACPI: SSDT 0x00000000000C9040 00007B (v01 AMAZON AMZNSSDT 00000001 AMZN 00000001)
Dec 13 14:32:20.038435 kernel: ACPI: Reserving FACP table memory at [mem 0x7d9eff80-0x7d9efff3]
Dec 13 14:32:20.038442 kernel: ACPI: Reserving DSDT table memory at [mem 0x7d9ee3a0-0x7d9ef488]
Dec 13 14:32:20.038449 kernel: ACPI: Reserving FACS table memory at [mem 0x7d9eff40-0x7d9eff7f]
Dec 13 14:32:20.038459 kernel: ACPI: Reserving SSDT table memory at [mem 0x7d9ef6c0-0x7d9eff39]
Dec 13 14:32:20.038466 kernel: ACPI: Reserving APIC table memory at [mem 0x7d9ef5d0-0x7d9ef645]
Dec 13 14:32:20.038473 kernel: ACPI: Reserving SRAT table memory at [mem 0x7d9ef530-0x7d9ef5cf]
Dec 13 14:32:20.038481 kernel: ACPI: Reserving SLIT table memory at [mem 0x7d9ef4c0-0x7d9ef52b]
Dec 13 14:32:20.038490 kernel: ACPI: Reserving WAET table memory at [mem 0x7d9ef490-0x7d9ef4b7]
Dec 13 14:32:20.038498 kernel: ACPI: Reserving HPET table memory at [mem 0xc9000-0xc9037]
Dec 13 14:32:20.038505 kernel: ACPI: Reserving SSDT table memory at [mem 0xc9040-0xc90ba]
Dec 13 14:32:20.038512 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 14:32:20.038520 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 14:32:20.038527 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x7fffffff]
Dec 13 14:32:20.038534 kernel: NUMA: Initialized distance table, cnt=1
Dec 13 14:32:20.038541 kernel: NODE_DATA(0) allocated [mem 0x7d9e3000-0x7d9e8fff]
Dec 13 14:32:20.038551 kernel: Zone ranges:
Dec 13 14:32:20.038558 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 14:32:20.038566 kernel: DMA32 [mem 0x0000000001000000-0x000000007d9e9fff]
Dec 13 14:32:20.038573 kernel: Normal empty
Dec 13 14:32:20.038581 kernel: Movable zone start for each node
Dec 13 14:32:20.038588 kernel: Early memory node ranges
Dec 13 14:32:20.038595 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 14:32:20.038603 kernel: node 0: [mem 0x0000000000100000-0x000000007d9e9fff]
Dec 13 14:32:20.038610 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007d9e9fff]
Dec 13 14:32:20.038620 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 14:32:20.038627 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 14:32:20.038635 kernel: On node 0, zone DMA32: 9750 pages in unavailable ranges
Dec 13 14:32:20.038642 kernel: ACPI: PM-Timer IO Port: 0xb008
Dec 13 14:32:20.038649 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 14:32:20.038657 kernel: IOAPIC[0]: apic_id 0, version 32, address 0xfec00000, GSI 0-23
Dec 13 14:32:20.038664 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 14:32:20.038671 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 14:32:20.038679 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 14:32:20.038688 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 14:32:20.038696 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 14:32:20.038703 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 14:32:20.038710 kernel: TSC deadline timer available
Dec 13 14:32:20.038732 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 14:32:20.038740 kernel: [mem 0x80000000-0xdfffffff] available for PCI devices
Dec 13 14:32:20.038748 kernel: Booting paravirtualized kernel on KVM
Dec 13 14:32:20.038755 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 14:32:20.038763 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Dec 13 14:32:20.038773 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Dec 13 14:32:20.038780 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Dec 13 14:32:20.038788 kernel: pcpu-alloc: [0] 0 1
Dec 13 14:32:20.038795 kernel: kvm-guest: stealtime: cpu 0, msr 7b61c0c0
Dec 13 14:32:20.038802 kernel: kvm-guest: PV spinlocks enabled
Dec 13 14:32:20.038810 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 14:32:20.038817 kernel: Built 1 zonelists, mobility grouping on. Total pages: 506242
Dec 13 14:32:20.038824 kernel: Policy zone: DMA32
Dec 13 14:32:20.038833 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:32:20.038843 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:32:20.038850 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:32:20.038858 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 14:32:20.038865 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:32:20.038873 kernel: Memory: 1934420K/2057760K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47472K init, 4112K bss, 123080K reserved, 0K cma-reserved)
Dec 13 14:32:20.038881 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 14:32:20.038888 kernel: Kernel/User page tables isolation: enabled
Dec 13 14:32:20.038895 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 14:32:20.038905 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 14:32:20.038912 kernel: rcu: Hierarchical RCU implementation.
Dec 13 14:32:20.038920 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:32:20.038928 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 14:32:20.038935 kernel: Rude variant of Tasks RCU enabled.
Dec 13 14:32:20.038942 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:32:20.038950 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:32:20.038957 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 14:32:20.038965 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Dec 13 14:32:20.038974 kernel: random: crng init done
Dec 13 14:32:20.038981 kernel: Console: colour VGA+ 80x25
Dec 13 14:32:20.038989 kernel: printk: console [ttyS0] enabled
Dec 13 14:32:20.038996 kernel: ACPI: Core revision 20210730
Dec 13 14:32:20.039004 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 30580167144 ns
Dec 13 14:32:20.039011 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 14:32:20.039019 kernel: x2apic enabled
Dec 13 14:32:20.039026 kernel: Switched APIC routing to physical x2apic.
Dec 13 14:32:20.039034 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:32:20.039043 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499996)
Dec 13 14:32:20.039051 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 14:32:20.039058 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 14:32:20.039066 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 14:32:20.039142 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 14:32:20.039154 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 14:32:20.039162 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 14:32:20.039171 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 14:32:20.039179 kernel: RETBleed: Vulnerable
Dec 13 14:32:20.039186 kernel: Speculative Store Bypass: Vulnerable
Dec 13 14:32:20.039194 kernel: MDS: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:32:20.039202 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 14:32:20.039210 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 14:32:20.039217 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 14:32:20.039227 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 14:32:20.039235 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 14:32:20.039243 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 14:32:20.039251 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 14:32:20.039259 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 14:32:20.039268 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 14:32:20.039276 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 14:32:20.039284 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Dec 13 14:32:20.039292 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 14:32:20.039300 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 14:32:20.039307 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 14:32:20.039315 kernel: x86/fpu: xstate_offset[5]: 960, xstate_sizes[5]: 64
Dec 13 14:32:20.039323 kernel: x86/fpu: xstate_offset[6]: 1024, xstate_sizes[6]: 512
Dec 13 14:32:20.039331 kernel: x86/fpu: xstate_offset[7]: 1536, xstate_sizes[7]: 1024
Dec 13 14:32:20.039338 kernel: x86/fpu: xstate_offset[9]: 2560, xstate_sizes[9]: 8
Dec 13 14:32:20.039346 kernel: x86/fpu: Enabled xstate features 0x2ff, context size is 2568 bytes, using 'compacted' format.
Dec 13 14:32:20.039354 kernel: Freeing SMP alternatives memory: 32K
Dec 13 14:32:20.039363 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:32:20.039381 kernel: LSM: Security Framework initializing
Dec 13 14:32:20.039393 kernel: SELinux: Initializing.
Dec 13 14:32:20.039401 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:32:20.039409 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 14:32:20.039417 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 14:32:20.039425 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 14:32:20.039433 kernel: signal: max sigframe size: 3632
Dec 13 14:32:20.039441 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:32:20.039449 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 14:32:20.039459 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:32:20.039467 kernel: x86: Booting SMP configuration:
Dec 13 14:32:20.039475 kernel: .... node #0, CPUs: #1
Dec 13 14:32:20.039483 kernel: kvm-clock: cpu 1, msr d19a041, secondary cpu clock
Dec 13 14:32:20.039491 kernel: kvm-guest: stealtime: cpu 1, msr 7b71c0c0
Dec 13 14:32:20.039499 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Dec 13 14:32:20.039508 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 14:32:20.039516 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 14:32:20.039524 kernel: smpboot: Max logical packages: 1
Dec 13 14:32:20.039534 kernel: smpboot: Total of 2 processors activated (9999.98 BogoMIPS)
Dec 13 14:32:20.039542 kernel: devtmpfs: initialized
Dec 13 14:32:20.039550 kernel: x86/mm: Memory block size: 128MB
Dec 13 14:32:20.039558 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:32:20.039566 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 14:32:20.039574 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:32:20.039582 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:32:20.039589 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:32:20.039597 kernel: audit: type=2000 audit(1734100339.553:1): state=initialized audit_enabled=0 res=1
Dec 13 14:32:20.039607 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:32:20.039615 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 14:32:20.039622 kernel: cpuidle: using governor menu
Dec 13 14:32:20.039631 kernel: ACPI: bus type PCI registered
Dec 13 14:32:20.039639 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:32:20.039646 kernel: dca service started, version 1.12.1
Dec 13 14:32:20.039654 kernel: PCI: Using configuration type 1 for base access
Dec 13 14:32:20.039662 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 14:32:20.039670 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:32:20.039680 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:32:20.039688 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:32:20.039696 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:32:20.039752 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:32:20.039762 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:32:20.039770 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:32:20.039778 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:32:20.039786 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:32:20.039794 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Dec 13 14:32:20.039804 kernel: ACPI: Interpreter enabled
Dec 13 14:32:20.039812 kernel: ACPI: PM: (supports S0 S5)
Dec 13 14:32:20.039820 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 14:32:20.039828 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 14:32:20.039836 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Dec 13 14:32:20.039844 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:32:20.039988 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:32:20.040076 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Dec 13 14:32:20.040089 kernel: acpiphp: Slot [3] registered
Dec 13 14:32:20.040098 kernel: acpiphp: Slot [4] registered
Dec 13 14:32:20.040106 kernel: acpiphp: Slot [5] registered
Dec 13 14:32:20.040113 kernel: acpiphp: Slot [6] registered
Dec 13 14:32:20.040121 kernel: acpiphp: Slot [7] registered
Dec 13 14:32:20.040129 kernel: acpiphp: Slot [8] registered
Dec 13 14:32:20.040137 kernel: acpiphp: Slot [9] registered
Dec 13 14:32:20.040145 kernel: acpiphp: Slot [10] registered
Dec 13 14:32:20.040153 kernel: acpiphp: Slot [11] registered
Dec 13 14:32:20.040163 kernel: acpiphp: Slot [12] registered
Dec 13 14:32:20.040171 kernel: acpiphp: Slot [13] registered
Dec 13 14:32:20.040179 kernel: acpiphp: Slot [14] registered
Dec 13 14:32:20.040187 kernel: acpiphp: Slot [15] registered
Dec 13 14:32:20.040471 kernel: acpiphp: Slot [16] registered
Dec 13 14:32:20.040484 kernel: acpiphp: Slot [17] registered
Dec 13 14:32:20.040493 kernel: acpiphp: Slot [18] registered
Dec 13 14:32:20.040501 kernel: acpiphp: Slot [19] registered
Dec 13 14:32:20.041343 kernel: acpiphp: Slot [20] registered
Dec 13 14:32:20.041357 kernel: acpiphp: Slot [21] registered
Dec 13 14:32:20.041366 kernel: acpiphp: Slot [22] registered
Dec 13 14:32:20.041374 kernel: acpiphp: Slot [23] registered
Dec 13 14:32:20.041382 kernel: acpiphp: Slot [24] registered
Dec 13 14:32:20.041390 kernel: acpiphp: Slot [25] registered
Dec 13 14:32:20.041398 kernel: acpiphp: Slot [26] registered
Dec 13 14:32:20.041406 kernel: acpiphp: Slot [27] registered
Dec 13 14:32:20.041414 kernel: acpiphp: Slot [28] registered
Dec 13 14:32:20.041422 kernel: acpiphp: Slot [29] registered
Dec 13 14:32:20.041430 kernel: acpiphp: Slot [30] registered
Dec 13 14:32:20.041440 kernel: acpiphp: Slot [31] registered
Dec 13 14:32:20.041448 kernel: PCI host bridge to bus 0000:00
Dec 13 14:32:20.041588 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 14:32:20.041670 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 14:32:20.041756 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 14:32:20.042447 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Dec 13 14:32:20.042607 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:32:20.042711 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Dec 13 14:32:20.042834 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Dec 13 14:32:20.042924 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x000000
Dec 13 14:32:20.043006 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Dec 13 14:32:20.043436 kernel: pci 0000:00:01.3: quirk: [io 0xb100-0xb10f] claimed by PIIX4 SMB
Dec 13 14:32:20.043580 kernel: pci 0000:00:01.3: PIIX4 devres E PIO at fff0-ffff
Dec 13 14:32:20.043781 kernel: pci 0000:00:01.3: PIIX4 devres F MMIO at ffc00000-ffffffff
Dec 13 14:32:20.043925 kernel: pci 0000:00:01.3: PIIX4 devres G PIO at fff0-ffff
Dec 13 14:32:20.044054 kernel: pci 0000:00:01.3: PIIX4 devres H MMIO at ffc00000-ffffffff
Dec 13 14:32:20.044182 kernel: pci 0000:00:01.3: PIIX4 devres I PIO at fff0-ffff
Dec 13 14:32:20.044306 kernel: pci 0000:00:01.3: PIIX4 devres J PIO at fff0-ffff
Dec 13 14:32:20.044444 kernel: pci 0000:00:03.0: [1d0f:1111] type 00 class 0x030000
Dec 13 14:32:20.044573 kernel: pci 0000:00:03.0: reg 0x10: [mem 0xfe400000-0xfe7fffff pref]
Dec 13 14:32:20.044701 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Dec 13 14:32:20.044848 kernel: pci 0000:00:03.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 14:32:20.045052 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Dec 13 14:32:20.045185 kernel: pci 0000:00:04.0: reg 0x10: [mem 0xfebf0000-0xfebf3fff]
Dec 13 14:32:20.045323 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Dec 13 14:32:20.045451 kernel: pci 0000:00:05.0: reg 0x10: [mem 0xfebf4000-0xfebf7fff]
Dec 13 14:32:20.045472 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 14:32:20.045491 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 14:32:20.045506 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 14:32:20.045522 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 14:32:20.045537 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Dec 13 14:32:20.045552 kernel: iommu: Default domain type: Translated
Dec 13 14:32:20.045567 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 14:32:20.045769 kernel: pci 0000:00:03.0: vgaarb: setting as boot VGA device
Dec 13 14:32:20.045953 kernel: pci 0000:00:03.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 14:32:20.046245 kernel: pci 0000:00:03.0: vgaarb: bridge control possible
Dec 13 14:32:20.046273 kernel: vgaarb: loaded
Dec 13 14:32:20.046287 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:32:20.046300 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:32:20.046313 kernel: PTP clock support registered
Dec 13 14:32:20.046326 kernel: PCI: Using ACPI for IRQ routing
Dec 13 14:32:20.046340 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 14:32:20.046353 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 14:32:20.046367 kernel: e820: reserve RAM buffer [mem 0x7d9ea000-0x7fffffff]
Dec 13 14:32:20.046383 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
Dec 13 14:32:20.046398 kernel: hpet0: 8 comparators, 32-bit 62.500000 MHz counter
Dec 13 14:32:20.046410 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 14:32:20.046424 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:32:20.046439 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:32:20.046453 kernel: pnp: PnP ACPI init
Dec 13 14:32:20.046467 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 14:32:20.046481 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 14:32:20.046494 kernel: NET: Registered PF_INET protocol family
Dec 13 14:32:20.046512 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:32:20.046527 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 14:32:20.046541 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:32:20.046554 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 14:32:20.046644 kernel: TCP bind hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 14:32:20.046663 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 14:32:20.046676 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:32:20.046688 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 14:32:20.046701 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:32:20.046727 kernel: NET: Registered PF_XDP protocol family
Dec 13 14:32:20.046858 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 14:32:20.046980 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 14:32:20.047177 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 14:32:20.047306 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Dec 13 14:32:20.047476 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Dec 13 14:32:20.047609 kernel: pci 0000:00:01.0: Activating ISA DMA hang workarounds
Dec 13 14:32:20.047633 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:32:20.047648 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 14:32:20.047664 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x24093623c91, max_idle_ns: 440795291220 ns
Dec 13 14:32:20.047680 kernel: clocksource: Switched to clocksource tsc
Dec 13 14:32:20.047695 kernel: Initialise system trusted keyrings
Dec 13 14:32:20.047781 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Dec 13 14:32:20.047798 kernel: Key type asymmetric registered
Dec 13 14:32:20.047812 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:32:20.047827 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:32:20.047845 kernel: io scheduler mq-deadline registered
Dec 13 14:32:20.047861 kernel: io scheduler kyber registered
Dec 13 14:32:20.047876 kernel: io scheduler bfq registered
Dec 13 14:32:20.047891 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 14:32:20.047906 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:32:20.047920 kernel: 00:04: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 14:32:20.047935 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 14:32:20.047951 kernel: i8042: Warning: Keylock active
Dec 13 14:32:20.047966 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 14:32:20.047983 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 14:32:20.048194 kernel: rtc_cmos 00:00: RTC can wake from S4
Dec 13 14:32:20.048417 kernel: rtc_cmos 00:00: registered as rtc0
Dec 13 14:32:20.048540 kernel: rtc_cmos 00:00: setting system clock to 2024-12-13T14:32:19 UTC (1734100339)
Dec 13 14:32:20.048711 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Dec 13 14:32:20.048749 kernel: intel_pstate: CPU model not supported
Dec 13 14:32:20.048763 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:32:20.048778 kernel: Segment Routing with IPv6
Dec 13 14:32:20.048796 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:32:20.048810 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:32:20.048822 kernel: Key type dns_resolver registered
Dec 13 14:32:20.048836 kernel: IPI shorthand broadcast: enabled
Dec 13 14:32:20.048850 kernel: sched_clock: Marking stable (396679811, 274422055)->(785849463, -114747597)
Dec 13 14:32:20.048865 kernel: registered taskstats version 1
Dec 13 14:32:20.048877 kernel: Loading compiled-in X.509 certificates
Dec 13 14:32:20.048888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e1d88c9e01f5bb2adeb5b99325e46e5ca8dff115'
Dec 13 14:32:20.048900 kernel: Key type .fscrypt registered
Dec 13 14:32:20.048916 kernel: Key type fscrypt-provisioning registered
Dec 13 14:32:20.048928 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:32:20.048941 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:32:20.048954 kernel: ima: No architecture policies found
Dec 13 14:32:20.048966 kernel: clk: Disabling unused clocks
Dec 13 14:32:20.048979 kernel: Freeing unused kernel image (initmem) memory: 47472K
Dec 13 14:32:20.048992 kernel: Write protecting the kernel read-only data: 28672k
Dec 13 14:32:20.049004 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Dec 13 14:32:20.049017 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K
Dec 13 14:32:20.049032 kernel: Run /init as init process
Dec 13 14:32:20.049044 kernel: with arguments:
Dec 13 14:32:20.049057 kernel: /init
Dec 13 14:32:20.049069 kernel: with environment:
Dec 13 14:32:20.049082 kernel: HOME=/
Dec 13 14:32:20.049094 kernel: TERM=linux
Dec 13 14:32:20.049106 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:32:20.049123 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:32:20.049141 systemd[1]: Detected virtualization amazon.
Dec 13 14:32:20.049155 systemd[1]: Detected architecture x86-64.
Dec 13 14:32:20.049168 systemd[1]: Running in initrd.
Dec 13 14:32:20.049181 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:32:20.049392 systemd[1]: Hostname set to .
Dec 13 14:32:20.049415 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:32:20.049429 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:32:20.049443 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:32:20.049457 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:32:20.049471 systemd[1]: Reached target paths.target.
Dec 13 14:32:20.049485 systemd[1]: Reached target slices.target.
Dec 13 14:32:20.049499 systemd[1]: Reached target swap.target.
Dec 13 14:32:20.049512 systemd[1]: Reached target timers.target.
Dec 13 14:32:20.049530 systemd[1]: Listening on iscsid.socket.
Dec 13 14:32:20.049544 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:32:20.049592 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:32:20.049607 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:32:20.049621 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:32:20.049635 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:32:20.049686 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:32:20.049706 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:32:20.049753 systemd[1]: Reached target sockets.target.
Dec 13 14:32:20.049770 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Dec 13 14:32:20.049784 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:32:20.049798 systemd[1]: Finished network-cleanup.service.
Dec 13 14:32:20.049812 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:32:20.049825 systemd[1]: Starting systemd-journald.service...
Dec 13 14:32:20.049893 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:32:20.049910 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:32:20.050113 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:32:20.050139 systemd-journald[185]: Journal started
Dec 13 14:32:20.050219 systemd-journald[185]: Runtime Journal (/run/log/journal/ec2984d379b57a63129f77e4e559f760) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:32:20.083765 systemd[1]: Started systemd-journald.service.
Dec 13 14:32:20.061034 systemd-modules-load[186]: Inserted module 'overlay'
Dec 13 14:32:20.257412 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:32:20.257449 kernel: Bridge firewalling registered
Dec 13 14:32:20.257468 kernel: SCSI subsystem initialized
Dec 13 14:32:20.257484 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:32:20.257505 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:32:20.257523 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:32:20.123776 systemd-modules-load[186]: Inserted module 'br_netfilter'
Dec 13 14:32:20.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.152878 systemd-resolved[187]: Positive Trust Anchors:
Dec 13 14:32:20.152890 systemd-resolved[187]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:32:20.265499 kernel: audit: type=1130 audit(1734100340.255:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.152943 systemd-resolved[187]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:32:20.279862 kernel: audit: type=1130 audit(1734100340.263:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.167018 systemd-resolved[187]: Defaulting to hostname 'linux'.
Dec 13 14:32:20.176803 systemd-modules-load[186]: Inserted module 'dm_multipath'
Dec 13 14:32:20.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.287068 kernel: audit: type=1130 audit(1734100340.280:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.257853 systemd[1]: Started systemd-resolved.service.
Dec 13 14:32:20.292232 kernel: audit: type=1130 audit(1734100340.285:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.280994 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:32:20.298287 kernel: audit: type=1130 audit(1734100340.290:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.282132 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:32:20.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.287268 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:32:20.292442 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:32:20.305454 kernel: audit: type=1130 audit(1734100340.296:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.298528 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:32:20.307101 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:32:20.308578 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:32:20.311283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:32:20.333870 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:32:20.335882 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:32:20.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.342757 kernel: audit: type=1130 audit(1734100340.333:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.342761 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:32:20.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.348788 kernel: audit: type=1130 audit(1734100340.340:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.349671 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:32:20.355293 kernel: audit: type=1130 audit(1734100340.346:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.362254 dracut-cmdline[208]: dracut-dracut-053
Dec 13 14:32:20.366735 dracut-cmdline[208]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=8c474c3ec361ec863adbecaa85281a726e1b53f7863ecc4742be8c5f6d02a66e
Dec 13 14:32:20.462245 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:32:20.486746 kernel: iscsi: registered transport (tcp)
Dec 13 14:32:20.520900 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:32:20.520973 kernel: QLogic iSCSI HBA Driver
Dec 13 14:32:20.564049 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:32:20.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:20.566905 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:32:20.623766 kernel: raid6: avx512x4 gen() 15275 MB/s
Dec 13 14:32:20.641769 kernel: raid6: avx512x4 xor() 6480 MB/s
Dec 13 14:32:20.659768 kernel: raid6: avx512x2 gen() 8544 MB/s
Dec 13 14:32:20.676769 kernel: raid6: avx512x2 xor() 20844 MB/s
Dec 13 14:32:20.693758 kernel: raid6: avx512x1 gen() 15621 MB/s
Dec 13 14:32:20.711759 kernel: raid6: avx512x1 xor() 19917 MB/s
Dec 13 14:32:20.728768 kernel: raid6: avx2x4 gen() 16483 MB/s
Dec 13 14:32:20.745756 kernel: raid6: avx2x4 xor() 7108 MB/s
Dec 13 14:32:20.762758 kernel: raid6: avx2x2 gen() 6060 MB/s
Dec 13 14:32:20.779773 kernel: raid6: avx2x2 xor() 14876 MB/s
Dec 13 14:32:20.796770 kernel: raid6: avx2x1 gen() 13204 MB/s
Dec 13 14:32:20.814771 kernel: raid6: avx2x1 xor() 14003 MB/s
Dec 13 14:32:20.831794 kernel: raid6: sse2x4 gen() 8856 MB/s
Dec 13 14:32:20.852854 kernel: raid6: sse2x4 xor() 3213 MB/s
Dec 13 14:32:20.871651 kernel: raid6: sse2x2 gen() 4923 MB/s
Dec 13 14:32:20.887765 kernel: raid6: sse2x2 xor() 5329 MB/s
Dec 13 14:32:20.904772 kernel: raid6: sse2x1 gen() 7828 MB/s
Dec 13 14:32:20.923011 kernel: raid6: sse2x1 xor() 3753 MB/s
Dec 13 14:32:20.923110 kernel: raid6: using algorithm avx2x4 gen() 16483 MB/s
Dec 13 14:32:20.923129 kernel: raid6: .... xor() 7108 MB/s, rmw enabled
Dec 13 14:32:20.924223 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 14:32:20.941752 kernel: xor: automatically using best checksumming function avx
Dec 13 14:32:21.070824 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Dec 13 14:32:21.079664 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:32:21.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:21.080000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:32:21.080000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:32:21.082207 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:32:21.095014 systemd-udevd[385]: Using default interface naming scheme 'v252'.
Dec 13 14:32:21.101382 systemd[1]: Started systemd-udevd.service.
Dec 13 14:32:21.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:21.104297 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:32:21.125613 dracut-pre-trigger[394]: rd.md=0: removing MD RAID activation
Dec 13 14:32:21.165914 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:32:21.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:21.168478 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:32:21.221841 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:32:21.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:21.295743 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:32:21.343898 kernel: nvme nvme0: pci function 0000:00:04.0
Dec 13 14:32:21.344273 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Dec 13 14:32:21.347002 kernel: ena 0000:00:05.0: ENA device version: 0.10
Dec 13 14:32:21.357098 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Dec 13 14:32:21.357273 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 14:32:21.357294 kernel: ena 0000:00:05.0: LLQ is not supported Fallback to host mode policy.
Dec 13 14:32:21.357453 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Dec 13 14:32:21.357599 kernel: AES CTR mode by8 optimization enabled
Dec 13 14:32:21.357893 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem febf4000, mac addr 06:30:b8:64:5c:a5
Dec 13 14:32:21.361064 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 14:32:21.361132 kernel: GPT:9289727 != 16777215
Dec 13 14:32:21.361153 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 14:32:21.361172 kernel: GPT:9289727 != 16777215
Dec 13 14:32:21.361190 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 14:32:21.361209 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:32:21.363560 (udev-worker)[438]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:32:21.580350 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (439)
Dec 13 14:32:21.506433 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Dec 13 14:32:21.586171 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Dec 13 14:32:21.592051 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:32:21.606785 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Dec 13 14:32:21.608054 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Dec 13 14:32:21.612960 systemd[1]: Starting disk-uuid.service...
Dec 13 14:32:21.639914 disk-uuid[594]: Primary Header is updated.
Dec 13 14:32:21.639914 disk-uuid[594]: Secondary Entries is updated.
Dec 13 14:32:21.639914 disk-uuid[594]: Secondary Header is updated.
Dec 13 14:32:21.654841 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:32:21.668748 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:32:21.677764 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:32:22.681740 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Dec 13 14:32:22.682418 disk-uuid[595]: The operation has completed successfully.
Dec 13 14:32:22.851682 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 14:32:22.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:22.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:22.851816 systemd[1]: Finished disk-uuid.service.
Dec 13 14:32:22.864968 systemd[1]: Starting verity-setup.service...
Dec 13 14:32:22.883741 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 14:32:22.991847 systemd[1]: Found device dev-mapper-usr.device.
Dec 13 14:32:22.996607 systemd[1]: Mounting sysusr-usr.mount...
Dec 13 14:32:23.001353 systemd[1]: Finished verity-setup.service.
Dec 13 14:32:23.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.128862 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Dec 13 14:32:23.129399 systemd[1]: Mounted sysusr-usr.mount.
Dec 13 14:32:23.131205 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Dec 13 14:32:23.133427 systemd[1]: Starting ignition-setup.service...
Dec 13 14:32:23.134808 systemd[1]: Starting parse-ip-for-networkd.service...
Dec 13 14:32:23.157753 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:32:23.157819 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:32:23.157837 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:32:23.182752 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:32:23.198945 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 14:32:23.209126 systemd[1]: Finished ignition-setup.service.
Dec 13 14:32:23.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.212218 systemd[1]: Starting ignition-fetch-offline.service...
Dec 13 14:32:23.241520 systemd[1]: Finished parse-ip-for-networkd.service.
Dec 13 14:32:23.249000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.253000 audit: BPF prog-id=9 op=LOAD
Dec 13 14:32:23.257279 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:32:23.296082 systemd-networkd[1107]: lo: Link UP
Dec 13 14:32:23.296095 systemd-networkd[1107]: lo: Gained carrier
Dec 13 14:32:23.298843 systemd-networkd[1107]: Enumeration completed
Dec 13 14:32:23.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.298978 systemd[1]: Started systemd-networkd.service.
Dec 13 14:32:23.300934 systemd[1]: Reached target network.target.
Dec 13 14:32:23.302294 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:32:23.306005 systemd[1]: Starting iscsiuio.service...
Dec 13 14:32:23.313309 systemd-networkd[1107]: eth0: Link UP
Dec 13 14:32:23.314398 systemd-networkd[1107]: eth0: Gained carrier
Dec 13 14:32:23.316066 systemd[1]: Started iscsiuio.service.
Dec 13 14:32:23.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.318280 systemd[1]: Starting iscsid.service...
Dec 13 14:32:23.324831 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:32:23.324831 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Dec 13 14:32:23.324831 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Dec 13 14:32:23.324831 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored.
Dec 13 14:32:23.324831 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Dec 13 14:32:23.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.340366 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Dec 13 14:32:23.326605 systemd[1]: Started iscsid.service.
Dec 13 14:32:23.335749 systemd[1]: Starting dracut-initqueue.service...
Dec 13 14:32:23.339462 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.27.196/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:32:23.357565 systemd[1]: Finished dracut-initqueue.service.
Dec 13 14:32:23.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.358738 systemd[1]: Reached target remote-fs-pre.target.
Dec 13 14:32:23.359856 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:32:23.360123 systemd[1]: Reached target remote-fs.target.
Dec 13 14:32:23.361784 systemd[1]: Starting dracut-pre-mount.service...
Dec 13 14:32:23.379650 systemd[1]: Finished dracut-pre-mount.service.
Dec 13 14:32:23.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.583205 ignition[1081]: Ignition 2.14.0
Dec 13 14:32:23.583221 ignition[1081]: Stage: fetch-offline
Dec 13 14:32:23.583358 ignition[1081]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:23.583417 ignition[1081]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:23.613344 ignition[1081]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:23.615037 ignition[1081]: Ignition finished successfully
Dec 13 14:32:23.617263 systemd[1]: Finished ignition-fetch-offline.service.
Dec 13 14:32:23.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.618484 systemd[1]: Starting ignition-fetch.service...
Dec 13 14:32:23.628522 ignition[1131]: Ignition 2.14.0
Dec 13 14:32:23.628535 ignition[1131]: Stage: fetch
Dec 13 14:32:23.628749 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:23.628783 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:23.637888 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:23.640903 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:23.652682 ignition[1131]: INFO : PUT result: OK
Dec 13 14:32:23.655351 ignition[1131]: DEBUG : parsed url from cmdline: ""
Dec 13 14:32:23.655351 ignition[1131]: INFO : no config URL provided
Dec 13 14:32:23.655351 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Dec 13 14:32:23.655351 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign"
Dec 13 14:32:23.662340 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:23.662340 ignition[1131]: INFO : PUT result: OK
Dec 13 14:32:23.664851 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Dec 13 14:32:23.666607 ignition[1131]: INFO : GET result: OK
Dec 13 14:32:23.667566 ignition[1131]: DEBUG : parsing config with SHA512: f4b374cf1b38adbf3b692f3c19268192dfd292965abb060ee0991f4d22f2e5071f6fae6994b1f6f8abeeccb0374886e594813890ced0adfc565c347b2d8c3aa1
Dec 13 14:32:23.675533 unknown[1131]: fetched base config from "system"
Dec 13 14:32:23.675543 unknown[1131]: fetched base config from "system"
Dec 13 14:32:23.675549 unknown[1131]: fetched user config from "aws"
Dec 13 14:32:23.679066 ignition[1131]: fetch: fetch complete
Dec 13 14:32:23.679079 ignition[1131]: fetch: fetch passed
Dec 13 14:32:23.679144 ignition[1131]: Ignition finished successfully
Dec 13 14:32:23.682961 systemd[1]: Finished ignition-fetch.service.
Dec 13 14:32:23.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.685105 systemd[1]: Starting ignition-kargs.service...
Dec 13 14:32:23.697181 ignition[1137]: Ignition 2.14.0
Dec 13 14:32:23.697194 ignition[1137]: Stage: kargs
Dec 13 14:32:23.697385 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:23.697417 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:23.706876 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:23.708574 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:23.711654 ignition[1137]: INFO : PUT result: OK
Dec 13 14:32:23.715425 ignition[1137]: kargs: kargs passed
Dec 13 14:32:23.715494 ignition[1137]: Ignition finished successfully
Dec 13 14:32:23.718280 systemd[1]: Finished ignition-kargs.service.
Dec 13 14:32:23.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.719605 systemd[1]: Starting ignition-disks.service...
Dec 13 14:32:23.733247 ignition[1143]: Ignition 2.14.0
Dec 13 14:32:23.733272 ignition[1143]: Stage: disks
Dec 13 14:32:23.733467 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:23.733497 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:23.746443 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:23.749098 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:23.752385 ignition[1143]: INFO : PUT result: OK
Dec 13 14:32:23.759359 ignition[1143]: disks: disks passed
Dec 13 14:32:23.759472 ignition[1143]: Ignition finished successfully
Dec 13 14:32:23.764773 systemd[1]: Finished ignition-disks.service.
Dec 13 14:32:23.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.770645 systemd[1]: Reached target initrd-root-device.target.
Dec 13 14:32:23.771200 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:32:23.771378 systemd[1]: Reached target local-fs.target.
Dec 13 14:32:23.773196 systemd[1]: Reached target sysinit.target.
Dec 13 14:32:23.774929 systemd[1]: Reached target basic.target.
Dec 13 14:32:23.777996 systemd[1]: Starting systemd-fsck-root.service...
Dec 13 14:32:23.811115 systemd-fsck[1151]: ROOT: clean, 621/553520 files, 56021/553472 blocks
Dec 13 14:32:23.815633 systemd[1]: Finished systemd-fsck-root.service.
Dec 13 14:32:23.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:23.816994 systemd[1]: Mounting sysroot.mount...
Dec 13 14:32:23.857417 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Dec 13 14:32:23.858780 systemd[1]: Mounted sysroot.mount.
Dec 13 14:32:23.859908 systemd[1]: Reached target initrd-root-fs.target.
Dec 13 14:32:23.867735 systemd[1]: Mounting sysroot-usr.mount...
Dec 13 14:32:23.869860 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Dec 13 14:32:23.869930 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 14:32:23.872605 systemd[1]: Reached target ignition-diskful.target.
Dec 13 14:32:23.880475 systemd[1]: Mounted sysroot-usr.mount.
Dec 13 14:32:23.883209 systemd[1]: Starting initrd-setup-root.service...
Dec 13 14:32:23.890461 initrd-setup-root[1172]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 14:32:23.898960 initrd-setup-root[1180]: cut: /sysroot/etc/group: No such file or directory
Dec 13 14:32:23.905033 initrd-setup-root[1188]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 14:32:23.912879 initrd-setup-root[1196]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 14:32:24.027820 systemd[1]: Finished initrd-setup-root.service.
Dec 13 14:32:24.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:24.029102 systemd[1]: Starting ignition-mount.service...
Dec 13 14:32:24.036236 systemd[1]: Starting sysroot-boot.service...
Dec 13 14:32:24.042641 bash[1213]: umount: /sysroot/usr/share/oem: not mounted.
Dec 13 14:32:24.043548 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Dec 13 14:32:24.060123 ignition[1215]: INFO : Ignition 2.14.0
Dec 13 14:32:24.060123 ignition[1215]: INFO : Stage: mount
Dec 13 14:32:24.062298 ignition[1215]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:24.062298 ignition[1215]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:24.075777 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1214)
Dec 13 14:32:24.079977 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 14:32:24.080039 kernel: BTRFS info (device nvme0n1p6): using free space tree
Dec 13 14:32:24.080057 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Dec 13 14:32:24.084098 ignition[1215]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:24.086011 ignition[1215]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:24.089036 ignition[1215]: INFO : PUT result: OK
Dec 13 14:32:24.095900 ignition[1215]: INFO : mount: mount passed
Dec 13 14:32:24.100018 ignition[1215]: INFO : Ignition finished successfully
Dec 13 14:32:24.103986 systemd[1]: Finished ignition-mount.service.
Dec 13 14:32:24.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:24.115748 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Dec 13 14:32:24.117429 systemd[1]: Finished sysroot-boot.service.
Dec 13 14:32:24.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:24.125502 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Dec 13 14:32:24.128471 systemd[1]: Starting ignition-files.service...
Dec 13 14:32:24.155459 ignition[1244]: INFO : Ignition 2.14.0
Dec 13 14:32:24.155459 ignition[1244]: INFO : Stage: files
Dec 13 14:32:24.158121 ignition[1244]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:24.158121 ignition[1244]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:24.169694 ignition[1244]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:24.171438 ignition[1244]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:24.173538 ignition[1244]: INFO : PUT result: OK
Dec 13 14:32:24.177755 ignition[1244]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 14:32:24.186241 ignition[1244]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 14:32:24.186241 ignition[1244]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 14:32:24.203207 ignition[1244]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 14:32:24.205733 ignition[1244]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 14:32:24.207970 unknown[1244]: wrote ssh authorized keys file for user: core
Dec 13 14:32:24.209536 ignition[1244]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 14:32:24.222912 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:32:24.231081 ignition[1244]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 14:32:24.320303 ignition[1244]: INFO : GET result: OK
Dec 13 14:32:24.592038 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 14:32:24.592038 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:32:24.598256 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 14:32:24.598256 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:32:24.603505 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:24.610523 ignition[1244]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1671064677"
Dec 13 14:32:24.614178 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1247)
Dec 13 14:32:24.614206 ignition[1244]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1671064677": device or resource busy
Dec 13 14:32:24.614206 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1671064677", trying btrfs: device or resource busy
Dec 13 14:32:24.614206 ignition[1244]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1671064677"
Dec 13 14:32:24.621353 ignition[1244]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1671064677"
Dec 13 14:32:24.623059 ignition[1244]: INFO : op(3): [started] unmounting "/mnt/oem1671064677"
Dec 13 14:32:24.625467 systemd[1]: mnt-oem1671064677.mount: Deactivated successfully.
Dec 13 14:32:24.626698 ignition[1244]: INFO : op(3): [finished] unmounting "/mnt/oem1671064677"
Dec 13 14:32:24.628178 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Dec 13 14:32:24.631811 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:32:24.631811 ignition[1244]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Dec 13 14:32:24.759915 systemd-networkd[1107]: eth0: Gained IPv6LL
Dec 13 14:32:25.076634 ignition[1244]: INFO : GET result: OK
Dec 13 14:32:25.224454 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Dec 13 14:32:25.228760 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:25.228760 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 14:32:25.228760 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:32:25.238876 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 14:32:25.238876 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:32:25.238876 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 14:32:25.238876 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:25.252906 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 14:32:25.252906 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:25.252906 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:25.252906 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:32:25.273899 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:25.275600 ignition[1244]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2067630840"
Dec 13 14:32:25.275600 ignition[1244]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2067630840": device or resource busy
Dec 13 14:32:25.275600 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2067630840", trying btrfs: device or resource busy
Dec 13 14:32:25.275600 ignition[1244]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2067630840"
Dec 13 14:32:25.282815 ignition[1244]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2067630840"
Dec 13 14:32:25.282815 ignition[1244]: INFO : op(6): [started] unmounting "/mnt/oem2067630840"
Dec 13 14:32:25.282815 ignition[1244]: INFO : op(6): [finished] unmounting "/mnt/oem2067630840"
Dec 13 14:32:25.282815 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Dec 13 14:32:25.282815 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:32:25.282815 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:25.296456 systemd[1]: mnt-oem2067630840.mount: Deactivated successfully.
Dec 13 14:32:25.306137 ignition[1244]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem787918719"
Dec 13 14:32:25.307812 ignition[1244]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem787918719": device or resource busy
Dec 13 14:32:25.307812 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem787918719", trying btrfs: device or resource busy
Dec 13 14:32:25.307812 ignition[1244]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem787918719"
Dec 13 14:32:25.313237 ignition[1244]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem787918719"
Dec 13 14:32:25.313237 ignition[1244]: INFO : op(9): [started] unmounting "/mnt/oem787918719"
Dec 13 14:32:25.313237 ignition[1244]: INFO : op(9): [finished] unmounting "/mnt/oem787918719"
Dec 13 14:32:25.313237 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Dec 13 14:32:25.313237 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:25.313237 ignition[1244]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Dec 13 14:32:25.798683 ignition[1244]: INFO : GET result: OK
Dec 13 14:32:26.086547 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Dec 13 14:32:26.086547 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:32:26.091904 ignition[1244]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Dec 13 14:32:26.102432 ignition[1244]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem216649768"
Dec 13 14:32:26.104275 ignition[1244]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem216649768": device or resource busy
Dec 13 14:32:26.104275 ignition[1244]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem216649768", trying btrfs: device or resource busy
Dec 13 14:32:26.104275 ignition[1244]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem216649768"
Dec 13 14:32:26.113670 ignition[1244]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem216649768"
Dec 13 14:32:26.113670 ignition[1244]: INFO : op(c): [started] unmounting "/mnt/oem216649768"
Dec 13 14:32:26.117295 ignition[1244]: INFO : op(c): [finished] unmounting "/mnt/oem216649768"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(11): op(12): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(13): [started] processing unit "nvidia.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(14): [started] processing unit "prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(14): [finished] processing unit "prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(16): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(18): [started] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(18): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Dec 13 14:32:26.117295 ignition[1244]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Dec 13 14:32:26.116977 systemd[1]: mnt-oem216649768.mount: Deactivated successfully.
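The "setting preset to enabled" operations above correspond to systemd preset entries that Ignition writes into the target filesystem, so the units come up enabled on first boot. A preset file covering the four units named in the log would look roughly like this (the path and file name are illustrative, not taken from the log):

```
# /sysroot/etc/systemd/system-preset/20-ignition.preset (illustrative path)
enable prepare-helm.service
enable coreos-metadata-sshkeys@.service
enable amazon-ssm-agent.service
enable nvidia.service
```

On the next `systemctl preset-all` (or first boot), systemd enables each listed unit as if `systemctl enable` had been run.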
Dec 13 14:32:26.168923 ignition[1244]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Dec 13 14:32:26.168923 ignition[1244]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:26.168923 ignition[1244]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 14:32:26.168923 ignition[1244]: INFO : files: files passed
Dec 13 14:32:26.168923 ignition[1244]: INFO : Ignition finished successfully
Dec 13 14:32:26.188952 kernel: kauditd_printk_skb: 26 callbacks suppressed
Dec 13 14:32:26.188988 kernel: audit: type=1130 audit(1734100346.171:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.170833 systemd[1]: Finished ignition-files.service.
Dec 13 14:32:26.181955 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Dec 13 14:32:26.186107 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Dec 13 14:32:26.187832 systemd[1]: Starting ignition-quench.service...
Dec 13 14:32:26.199357 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 14:32:26.199596 systemd[1]: Finished ignition-quench.service.
Dec 13 14:32:26.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.204766 initrd-setup-root-after-ignition[1269]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 14:32:26.213841 kernel: audit: type=1130 audit(1734100346.201:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.213885 kernel: audit: type=1131 audit(1734100346.201:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.201000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.214129 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Dec 13 14:32:26.216643 systemd[1]: Reached target ignition-complete.target.
Dec 13 14:32:26.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.225921 kernel: audit: type=1130 audit(1734100346.214:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.224550 systemd[1]: Starting initrd-parse-etc.service...
Dec 13 14:32:26.249208 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 14:32:26.249335 systemd[1]: Finished initrd-parse-etc.service.
Dec 13 14:32:26.253197 systemd[1]: Reached target initrd-fs.target.
Dec 13 14:32:26.256492 systemd[1]: Reached target initrd.target.
Dec 13 14:32:26.267500 kernel: audit: type=1130 audit(1734100346.251:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.267535 kernel: audit: type=1131 audit(1734100346.251:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.267563 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Dec 13 14:32:26.269980 systemd[1]: Starting dracut-pre-pivot.service...
Dec 13 14:32:26.286679 systemd[1]: Finished dracut-pre-pivot.service.
Dec 13 14:32:26.299763 kernel: audit: type=1130 audit(1734100346.288:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.295891 systemd[1]: Starting initrd-cleanup.service...
Dec 13 14:32:26.312859 systemd[1]: Stopped target nss-lookup.target.
Dec 13 14:32:26.316029 systemd[1]: Stopped target remote-cryptsetup.target.
Dec 13 14:32:26.318600 systemd[1]: Stopped target timers.target.
Dec 13 14:32:26.321074 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 14:32:26.323296 systemd[1]: Stopped dracut-pre-pivot.service.
Dec 13 14:32:26.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.329133 systemd[1]: Stopped target initrd.target.
Dec 13 14:32:26.337727 kernel: audit: type=1131 audit(1734100346.326:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.335354 systemd[1]: Stopped target basic.target.
Dec 13 14:32:26.338823 systemd[1]: Stopped target ignition-complete.target.
Dec 13 14:32:26.341096 systemd[1]: Stopped target ignition-diskful.target.
Dec 13 14:32:26.342880 systemd[1]: Stopped target initrd-root-device.target.
Dec 13 14:32:26.345030 systemd[1]: Stopped target remote-fs.target.
Dec 13 14:32:26.346673 systemd[1]: Stopped target remote-fs-pre.target.
Dec 13 14:32:26.349374 systemd[1]: Stopped target sysinit.target.
Dec 13 14:32:26.351986 systemd[1]: Stopped target local-fs.target.
Dec 13 14:32:26.353769 systemd[1]: Stopped target local-fs-pre.target.
Dec 13 14:32:26.355342 systemd[1]: Stopped target swap.target.
Dec 13 14:32:26.356820 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 14:32:26.357884 systemd[1]: Stopped dracut-pre-mount.service.
Dec 13 14:32:26.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.363079 systemd[1]: Stopped target cryptsetup.target.
Dec 13 14:32:26.364898 kernel: audit: type=1131 audit(1734100346.357:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.364813 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 14:32:26.366147 systemd[1]: Stopped dracut-initqueue.service.
Dec 13 14:32:26.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.367943 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 14:32:26.374801 kernel: audit: type=1131 audit(1734100346.366:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.368180 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Dec 13 14:32:26.373009 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 14:32:26.373127 systemd[1]: Stopped ignition-files.service.
Dec 13 14:32:26.376732 systemd[1]: Stopping ignition-mount.service...
Dec 13 14:32:26.381829 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 14:32:26.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.381986 systemd[1]: Stopped kmod-static-nodes.service.
Dec 13 14:32:26.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.385145 systemd[1]: Stopping sysroot-boot.service...
Dec 13 14:32:26.385240 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 14:32:26.385357 systemd[1]: Stopped systemd-udev-trigger.service.
Dec 13 14:32:26.391266 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 14:32:26.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.391529 systemd[1]: Stopped dracut-pre-trigger.service.
Dec 13 14:32:26.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.407139 ignition[1282]: INFO : Ignition 2.14.0
Dec 13 14:32:26.407139 ignition[1282]: INFO : Stage: umount
Dec 13 14:32:26.407139 ignition[1282]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Dec 13 14:32:26.407139 ignition[1282]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Dec 13 14:32:26.396383 systemd[1]: initrd-cleanup.service: Deactivated successfully.
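Both the "files" stage earlier and the "umount" stage here log the same "parsing config with SHA512" digest, because both stages parse the same base.ign. That digest is an ordinary SHA-512 over the config bytes and can be reproduced with the standard library. The config body below is a stand-in, not the real base.ign, so its digest will differ from the one in the log.

```python
import hashlib


def config_digest(config_bytes: bytes) -> str:
    """Hex SHA-512 digest of a config blob, in the form Ignition prints."""
    return hashlib.sha512(config_bytes).hexdigest()


# Example with an illustrative config body (not the real base.ign):
digest = config_digest(b'{"ignition": {"version": "2.14.0"}}')
```

Hashing the on-disk base.ign bytes the same way should reproduce the `6629d8e8...` value shown in the log.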
Dec 13 14:32:26.396691 systemd[1]: Finished initrd-cleanup.service.
Dec 13 14:32:26.419548 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 14:32:26.425330 ignition[1282]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Dec 13 14:32:26.426823 ignition[1282]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Dec 13 14:32:26.428408 ignition[1282]: INFO : PUT result: OK
Dec 13 14:32:26.431718 ignition[1282]: INFO : umount: umount passed
Dec 13 14:32:26.432802 ignition[1282]: INFO : Ignition finished successfully
Dec 13 14:32:26.433862 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 14:32:26.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.433982 systemd[1]: Stopped ignition-mount.service.
Dec 13 14:32:26.435139 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 14:32:26.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.435197 systemd[1]: Stopped ignition-disks.service.
Dec 13 14:32:26.437033 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 14:32:26.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.437089 systemd[1]: Stopped ignition-kargs.service.
Dec 13 14:32:26.438786 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 14:32:26.438836 systemd[1]: Stopped ignition-fetch.service.
Dec 13 14:32:26.440770 systemd[1]: Stopped target network.target.
Dec 13 14:32:26.441745 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 14:32:26.442465 systemd[1]: Stopped ignition-fetch-offline.service.
Dec 13 14:32:26.444790 systemd[1]: Stopped target paths.target.
Dec 13 14:32:26.453200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 14:32:26.456794 systemd[1]: Stopped systemd-ask-password-console.path.
Dec 13 14:32:26.457906 systemd[1]: Stopped target slices.target.
Dec 13 14:32:26.457949 systemd[1]: Stopped target sockets.target.
Dec 13 14:32:26.462093 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 14:32:26.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.462225 systemd[1]: Closed iscsid.socket.
Dec 13 14:32:26.464200 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 14:32:26.464237 systemd[1]: Closed iscsiuio.socket.
Dec 13 14:32:26.465926 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 14:32:26.465995 systemd[1]: Stopped ignition-setup.service.
Dec 13 14:32:26.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.467128 systemd[1]: Stopping systemd-networkd.service...
Dec 13 14:32:26.468163 systemd[1]: Stopping systemd-resolved.service...
Dec 13 14:32:26.471498 systemd-networkd[1107]: eth0: DHCPv6 lease lost
Dec 13 14:32:26.474986 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 14:32:26.475171 systemd[1]: Stopped systemd-networkd.service.
Dec 13 14:32:26.483682 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 14:32:26.487045 systemd[1]: Stopped systemd-resolved.service.
Dec 13 14:32:26.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.488000 audit: BPF prog-id=9 op=UNLOAD
Dec 13 14:32:26.489000 audit: BPF prog-id=6 op=UNLOAD
Dec 13 14:32:26.490849 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 14:32:26.490903 systemd[1]: Closed systemd-networkd.socket.
Dec 13 14:32:26.497054 systemd[1]: Stopping network-cleanup.service...
Dec 13 14:32:26.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.498366 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 14:32:26.498441 systemd[1]: Stopped parse-ip-for-networkd.service.
Dec 13 14:32:26.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.500880 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:32:26.500947 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:32:26.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.504375 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 14:32:26.504427 systemd[1]: Stopped systemd-modules-load.service.
Dec 13 14:32:26.517645 systemd[1]: Stopping systemd-udevd.service...
Dec 13 14:32:26.528947 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Dec 13 14:32:26.537049 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 14:32:26.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.537189 systemd[1]: Stopped network-cleanup.service.
Dec 13 14:32:26.542132 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 14:32:26.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.542300 systemd[1]: Stopped systemd-udevd.service.
Dec 13 14:32:26.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.543483 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 14:32:26.543535 systemd[1]: Closed systemd-udevd-control.socket.
Dec 13 14:32:26.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.544534 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 14:32:26.544585 systemd[1]: Closed systemd-udevd-kernel.socket.
Dec 13 14:32:26.545574 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 14:32:26.545633 systemd[1]: Stopped dracut-pre-udev.service.
Dec 13 14:32:26.548495 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 14:32:26.554045 systemd[1]: Stopped dracut-cmdline.service.
Dec 13 14:32:26.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.560579 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 14:32:26.560690 systemd[1]: Stopped dracut-cmdline-ask.service.
Dec 13 14:32:26.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.567991 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Dec 13 14:32:26.574821 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 14:32:26.574900 systemd[1]: Stopped systemd-vconsole-setup.service.
Dec 13 14:32:26.577654 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 14:32:26.579276 systemd[1]: Stopped sysroot-boot.service.
Dec 13 14:32:26.585933 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 14:32:26.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.588000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.587973 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Dec 13 14:32:26.592772 systemd[1]: Reached target initrd-switch-root.target.
Dec 13 14:32:26.595986 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 14:32:26.598746 systemd[1]: Stopped initrd-setup-root.service.
Dec 13 14:32:26.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:26.604015 systemd[1]: Starting initrd-switch-root.service...
Dec 13 14:32:26.622773 systemd[1]: Switching root.
Dec 13 14:32:26.655135 iscsid[1112]: iscsid shutting down.
Dec 13 14:32:26.656526 systemd-journald[185]: Received SIGTERM from PID 1 (n/a).
Dec 13 14:32:26.656596 systemd-journald[185]: Journal stopped
Dec 13 14:32:31.533251 kernel: SELinux: Class mctp_socket not defined in policy.
Dec 13 14:32:31.533319 kernel: SELinux: Class anon_inode not defined in policy.
Dec 13 14:32:31.533343 kernel: SELinux: the above unknown classes and permissions will be allowed
Dec 13 14:32:31.533364 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 14:32:31.533388 kernel: SELinux: policy capability open_perms=1
Dec 13 14:32:31.533405 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 14:32:31.533422 kernel: SELinux: policy capability always_check_network=0
Dec 13 14:32:31.533442 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 14:32:31.533458 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 14:32:31.533475 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 14:32:31.533492 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 14:32:31.533510 systemd[1]: Successfully loaded SELinux policy in 91.489ms.
Dec 13 14:32:31.533538 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.341ms.
Dec 13 14:32:31.533559 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:32:31.533577 systemd[1]: Detected virtualization amazon.
Dec 13 14:32:31.533598 systemd[1]: Detected architecture x86-64.
Dec 13 14:32:31.533615 systemd[1]: Detected first boot.
Dec 13 14:32:31.533633 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:32:31.533651 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Dec 13 14:32:31.533744 systemd[1]: Populated /etc with preset unit settings.
Dec 13 14:32:31.533766 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:32:31.533795 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:32:31.533819 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:32:31.533837 kernel: kauditd_printk_skb: 46 callbacks suppressed
Dec 13 14:32:31.533853 kernel: audit: type=1334 audit(1734100351.198:86): prog-id=12 op=LOAD
Dec 13 14:32:31.533870 kernel: audit: type=1334 audit(1734100351.198:87): prog-id=3 op=UNLOAD
Dec 13 14:32:31.533887 kernel: audit: type=1334 audit(1734100351.200:88): prog-id=13 op=LOAD
Dec 13 14:32:31.533903 kernel: audit: type=1334 audit(1734100351.204:89): prog-id=14 op=LOAD
Dec 13 14:32:31.533919 kernel: audit: type=1334 audit(1734100351.204:90): prog-id=4 op=UNLOAD
Dec 13 14:32:31.533937 kernel: audit: type=1334 audit(1734100351.204:91): prog-id=5 op=UNLOAD
Dec 13 14:32:31.533954 kernel: audit: type=1334 audit(1734100351.207:92): prog-id=15 op=LOAD
Dec 13 14:32:31.533970 kernel: audit: type=1334 audit(1734100351.207:93): prog-id=12 op=UNLOAD
Dec 13 14:32:31.533988 kernel: audit: type=1334 audit(1734100351.208:94): prog-id=16 op=LOAD
Dec 13 14:32:31.534005 kernel: audit: type=1334 audit(1734100351.209:95): prog-id=17 op=LOAD
Dec 13 14:32:31.534021 systemd[1]: iscsiuio.service: Deactivated successfully.
Dec 13 14:32:31.534039 systemd[1]: Stopped iscsiuio.service.
Dec 13 14:32:31.534057 systemd[1]: iscsid.service: Deactivated successfully.
Dec 13 14:32:31.534075 systemd[1]: Stopped iscsid.service.
Dec 13 14:32:31.534100 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 14:32:31.534118 systemd[1]: Stopped initrd-switch-root.service.
Dec 13 14:32:31.534137 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:32:31.534156 systemd[1]: Created slice system-addon\x2dconfig.slice.
Dec 13 14:32:31.534174 systemd[1]: Created slice system-addon\x2drun.slice.
Dec 13 14:32:31.534194 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Dec 13 14:32:31.534298 systemd[1]: Created slice system-getty.slice.
Dec 13 14:32:31.534321 systemd[1]: Created slice system-modprobe.slice.
Dec 13 14:32:31.534340 systemd[1]: Created slice system-serial\x2dgetty.slice.
Dec 13 14:32:31.534360 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Dec 13 14:32:31.534381 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Dec 13 14:32:31.534401 systemd[1]: Created slice user.slice.
Dec 13 14:32:31.534420 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:32:31.534438 systemd[1]: Started systemd-ask-password-wall.path.
Dec 13 14:32:31.534456 systemd[1]: Set up automount boot.automount.
Dec 13 14:32:31.534476 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Dec 13 14:32:31.534499 systemd[1]: Stopped target initrd-switch-root.target.
Dec 13 14:32:31.534519 systemd[1]: Stopped target initrd-fs.target.
Dec 13 14:32:31.534540 systemd[1]: Stopped target initrd-root-fs.target.
Dec 13 14:32:31.534561 systemd[1]: Reached target integritysetup.target.
Dec 13 14:32:31.534581 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:32:31.534602 systemd[1]: Reached target remote-fs.target.
Dec 13 14:32:31.534623 systemd[1]: Reached target slices.target.
Dec 13 14:32:31.534643 systemd[1]: Reached target swap.target.
Dec 13 14:32:31.534663 systemd[1]: Reached target torcx.target.
Dec 13 14:32:31.534683 systemd[1]: Reached target veritysetup.target.
Dec 13 14:32:31.534705 systemd[1]: Listening on systemd-coredump.socket.
Dec 13 14:32:31.534736 systemd[1]: Listening on systemd-initctl.socket.
Dec 13 14:32:31.534755 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:32:31.534773 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:32:31.534791 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:32:31.534808 systemd[1]: Listening on systemd-userdbd.socket.
Dec 13 14:32:31.534827 systemd[1]: Mounting dev-hugepages.mount...
Dec 13 14:32:31.534845 systemd[1]: Mounting dev-mqueue.mount...
Dec 13 14:32:31.534865 systemd[1]: Mounting media.mount...
Dec 13 14:32:31.534887 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:31.535959 systemd[1]: Mounting sys-kernel-debug.mount...
Dec 13 14:32:31.535985 systemd[1]: Mounting sys-kernel-tracing.mount...
Dec 13 14:32:31.536006 systemd[1]: Mounting tmp.mount...
Dec 13 14:32:31.536027 systemd[1]: Starting flatcar-tmpfiles.service...
Dec 13 14:32:31.536049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:31.536070 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:32:31.536090 systemd[1]: Starting modprobe@configfs.service...
Dec 13 14:32:31.536112 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:31.536135 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:32:31.536155 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:31.536177 systemd[1]: Starting modprobe@fuse.service...
Dec 13 14:32:31.536197 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:31.536217 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 14:32:31.536239 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 14:32:31.536259 systemd[1]: Stopped systemd-fsck-root.service.
Dec 13 14:32:31.536279 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 14:32:31.536298 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 14:32:31.536320 systemd[1]: Stopped systemd-journald.service.
Dec 13 14:32:31.536447 systemd[1]: Starting systemd-journald.service...
Dec 13 14:32:31.536476 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:32:31.536494 systemd[1]: Starting systemd-network-generator.service...
Dec 13 14:32:31.536512 systemd[1]: Starting systemd-remount-fs.service...
Dec 13 14:32:31.536541 systemd[1]: Starting systemd-udev-trigger.service...
Dec 13 14:32:31.536563 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 14:32:31.536584 systemd[1]: Stopped verity-setup.service.
Dec 13 14:32:31.536606 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:31.536625 systemd[1]: Mounted dev-hugepages.mount.
Dec 13 14:32:31.536642 systemd[1]: Mounted dev-mqueue.mount.
Dec 13 14:32:31.536660 systemd[1]: Mounted media.mount.
Dec 13 14:32:31.536680 systemd[1]: Mounted sys-kernel-debug.mount.
Dec 13 14:32:31.536698 systemd[1]: Mounted sys-kernel-tracing.mount.
Dec 13 14:32:31.539745 systemd[1]: Mounted tmp.mount.
Dec 13 14:32:31.539791 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:32:31.539810 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 14:32:31.539828 systemd[1]: Finished modprobe@configfs.service.
Dec 13 14:32:31.539845 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:31.539863 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:31.539886 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:32:31.539903 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:32:31.539921 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:31.539938 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:31.539959 kernel: loop: module loaded
Dec 13 14:32:31.539977 systemd[1]: Finished systemd-network-generator.service.
Dec 13 14:32:31.539995 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:31.540013 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:31.540034 systemd[1]: Finished systemd-remount-fs.service.
Dec 13 14:32:31.540052 systemd[1]: Reached target network-pre.target.
Dec 13 14:32:31.540070 systemd[1]: Mounting sys-kernel-config.mount...
Dec 13 14:32:31.540098 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 14:32:31.540122 kernel: fuse: init (API version 7.34)
Dec 13 14:32:31.540141 systemd[1]: Starting systemd-hwdb-update.service...
Dec 13 14:32:31.540169 systemd-journald[1398]: Journal started
Dec 13 14:32:31.540251 systemd-journald[1398]: Runtime Journal (/run/log/journal/ec2984d379b57a63129f77e4e559f760) is 4.8M, max 38.7M, 33.9M free.
Dec 13 14:32:31.540306 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:27.086000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 14:32:27.200000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:27.200000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Dec 13 14:32:27.203000 audit: BPF prog-id=10 op=LOAD
Dec 13 14:32:27.203000 audit: BPF prog-id=10 op=UNLOAD
Dec 13 14:32:27.203000 audit: BPF prog-id=11 op=LOAD
Dec 13 14:32:27.203000 audit: BPF prog-id=11 op=UNLOAD
Dec 13 14:32:27.445000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Dec 13 14:32:27.445000 audit[1315]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8b2 a1=c0000cede0 a2=c0000d70c0 a3=32 items=0 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:27.445000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:27.451000 audit[1315]: AVC avc: denied { associate } for pid=1315 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Dec 13 14:32:27.451000 audit[1315]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d989 a2=1ed a3=0 items=2 ppid=1298 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:27.451000 audit: CWD cwd="/"
Dec 13 14:32:27.451000 audit: PATH item=0 name=(null) inode=2 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.451000 audit: PATH item=1 name=(null) inode=3 dev=00:1b mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:27.451000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Dec 13 14:32:31.198000 audit: BPF prog-id=12 op=LOAD
Dec 13 14:32:31.198000 audit: BPF prog-id=3 op=UNLOAD
Dec 13 14:32:31.200000 audit: BPF prog-id=13 op=LOAD
Dec 13 14:32:31.204000 audit: BPF prog-id=14 op=LOAD
Dec 13 14:32:31.204000 audit: BPF prog-id=4 op=UNLOAD
Dec 13 14:32:31.204000 audit: BPF prog-id=5 op=UNLOAD
Dec 13 14:32:31.207000 audit: BPF prog-id=15 op=LOAD
Dec 13 14:32:31.207000 audit: BPF prog-id=12 op=UNLOAD
Dec 13 14:32:31.208000 audit: BPF prog-id=16 op=LOAD
Dec 13 14:32:31.209000 audit: BPF prog-id=17 op=LOAD
Dec 13 14:32:31.210000 audit: BPF prog-id=13 op=UNLOAD
Dec 13 14:32:31.210000 audit: BPF prog-id=14 op=UNLOAD
Dec 13 14:32:31.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.217000 audit: BPF prog-id=15 op=UNLOAD
Dec 13 14:32:31.218000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.424000 audit: BPF prog-id=18 op=LOAD
Dec 13 14:32:31.424000 audit: BPF prog-id=19 op=LOAD
Dec 13 14:32:31.424000 audit: BPF prog-id=20 op=LOAD
Dec 13 14:32:31.424000 audit: BPF prog-id=16 op=UNLOAD
Dec 13 14:32:31.424000 audit: BPF prog-id=17 op=UNLOAD
Dec 13 14:32:31.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.497000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.524000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Dec 13 14:32:31.524000 audit[1398]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffedeb4c5f0 a2=4000 a3=7ffedeb4c68c items=0 ppid=1 pid=1398 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:31.524000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Dec 13 14:32:27.443144 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:32:31.196985 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 14:32:27.444423 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:32:31.213136 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 14:32:27.444442 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:32:27.444474 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Dec 13 14:32:27.444484 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="skipped missing lower profile" missing profile=oem
Dec 13 14:32:27.444514 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Dec 13 14:32:31.562756 systemd[1]: Starting systemd-random-seed.service...
Dec 13 14:32:31.562818 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:31.562842 systemd[1]: Started systemd-journald.service.
Dec 13 14:32:31.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:27.444527 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Dec 13 14:32:31.558918 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 14:32:27.444834 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Dec 13 14:32:31.559119 systemd[1]: Finished modprobe@fuse.service.
Dec 13 14:32:27.444879 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Dec 13 14:32:31.560697 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:32:27.444899 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Dec 13 14:32:31.562002 systemd[1]: Mounted sys-kernel-config.mount.
Dec 13 14:32:27.445995 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Dec 13 14:32:27.446029 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Dec 13 14:32:27.446046 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.6: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.6
Dec 13 14:32:27.446060 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Dec 13 14:32:27.446076 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.6: no such file or directory" path=/var/lib/torcx/store/3510.3.6
Dec 13 14:32:27.446088 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:27Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Dec 13 14:32:30.488951 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:30.489207 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:31.564871 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Dec 13 14:32:30.489310 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:30.489491 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Dec 13 14:32:30.489540 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Dec 13 14:32:30.489595 /usr/lib/systemd/system-generators/torcx-generator[1315]: time="2024-12-13T14:32:30Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Dec 13 14:32:31.573491 systemd[1]: Starting systemd-journal-flush.service...
Dec 13 14:32:31.576426 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:32:31.588360 systemd[1]: Finished systemd-random-seed.service.
Dec 13 14:32:31.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.591980 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Dec 13 14:32:31.593548 systemd[1]: Reached target first-boot-complete.target.
Dec 13 14:32:31.600340 systemd-journald[1398]: Time spent on flushing to /var/log/journal/ec2984d379b57a63129f77e4e559f760 is 96.652ms for 1188 entries.
Dec 13 14:32:31.600340 systemd-journald[1398]: System Journal (/var/log/journal/ec2984d379b57a63129f77e4e559f760) is 8.0M, max 195.6M, 187.6M free.
Dec 13 14:32:31.717257 systemd-journald[1398]: Received client request to flush runtime journal.
Dec 13 14:32:31.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.629622 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:32:31.666899 systemd[1]: Finished flatcar-tmpfiles.service.
Dec 13 14:32:31.670252 systemd[1]: Starting systemd-sysusers.service...
Dec 13 14:32:31.701339 systemd[1]: Finished systemd-udev-trigger.service.
Dec 13 14:32:31.704026 systemd[1]: Starting systemd-udev-settle.service...
Dec 13 14:32:31.718597 systemd[1]: Finished systemd-journal-flush.service.
Dec 13 14:32:31.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.720580 udevadm[1430]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Dec 13 14:32:31.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:31.792469 systemd[1]: Finished systemd-sysusers.service.
Dec 13 14:32:32.273017 systemd[1]: Finished systemd-hwdb-update.service.
Dec 13 14:32:32.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.273000 audit: BPF prog-id=21 op=LOAD
Dec 13 14:32:32.273000 audit: BPF prog-id=22 op=LOAD
Dec 13 14:32:32.273000 audit: BPF prog-id=7 op=UNLOAD
Dec 13 14:32:32.273000 audit: BPF prog-id=8 op=UNLOAD
Dec 13 14:32:32.275664 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:32:32.321296 systemd-udevd[1432]: Using default interface naming scheme 'v252'.
Dec 13 14:32:32.385092 systemd[1]: Started systemd-udevd.service.
Dec 13 14:32:32.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.385000 audit: BPF prog-id=23 op=LOAD
Dec 13 14:32:32.388465 systemd[1]: Starting systemd-networkd.service...
Dec 13 14:32:32.436000 audit: BPF prog-id=24 op=LOAD
Dec 13 14:32:32.436000 audit: BPF prog-id=25 op=LOAD
Dec 13 14:32:32.436000 audit: BPF prog-id=26 op=LOAD
Dec 13 14:32:32.436234 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Dec 13 14:32:32.438999 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:32:32.447449 (udev-worker)[1442]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:32:32.490088 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:32:32.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.550747 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Dec 13 14:32:32.573736 kernel: ACPI: button: Power Button [PWRF]
Dec 13 14:32:32.573836 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Dec 13 14:32:32.596772 kernel: ACPI: button: Sleep Button [SLPF]
Dec 13 14:32:32.588000 audit[1448]: AVC avc: denied { confidentiality } for pid=1448 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Dec 13 14:32:32.588000 audit[1448]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5625b6df3300 a1=337fc a2=7f4147425bc5 a3=5 items=110 ppid=1432 pid=1448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:32.588000 audit: CWD cwd="/"
Dec 13 14:32:32.588000 audit: PATH item=0 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=1 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=2 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=3 name=(null) inode=13801 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=4 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=5 name=(null) inode=13802 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=6 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=7 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=8 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=9 name=(null) inode=13804 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=10 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=11 name=(null) inode=13805 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=12 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=13 name=(null) inode=13806 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=14 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=15 name=(null) inode=13807 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=16 name=(null) inode=13803 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=17 name=(null) inode=13808 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=18 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=19 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=20 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=21
name=(null) inode=13810 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=22 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=23 name=(null) inode=13811 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=24 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=25 name=(null) inode=13812 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=26 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=27 name=(null) inode=13813 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=28 name=(null) inode=13809 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=29 name=(null) inode=13814 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=30 name=(null) inode=13800 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=31 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=32 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=33 name=(null) inode=13816 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=34 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=35 name=(null) inode=13817 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=36 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=37 name=(null) inode=13818 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=38 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=39 name=(null) inode=13819 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=40 name=(null) inode=13815 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=41 name=(null) inode=13820 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=42 name=(null) inode=13800 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=43 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=44 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=45 name=(null) inode=13822 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=46 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=47 name=(null) inode=13823 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=48 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=49 name=(null) inode=13824 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=50 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=51 name=(null) inode=13825 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=52 name=(null) inode=13821 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=53 name=(null) inode=13826 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=54 name=(null) inode=1041 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=55 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=56 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=57 name=(null) inode=13828 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=58 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=59 name=(null) inode=13829 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=60 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=61 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=62 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=63 name=(null) inode=13831 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=64 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=65 name=(null) inode=13832 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=66 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 
14:32:32.588000 audit: PATH item=67 name=(null) inode=13833 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=68 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=69 name=(null) inode=13834 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=70 name=(null) inode=13830 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=71 name=(null) inode=13835 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=72 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=73 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=74 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=75 name=(null) inode=13837 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=76 
name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=77 name=(null) inode=13838 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=78 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=79 name=(null) inode=13839 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=80 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=81 name=(null) inode=13840 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=82 name=(null) inode=13836 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=83 name=(null) inode=13841 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=84 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=85 name=(null) inode=13842 dev=00:0b 
mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=86 name=(null) inode=13842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=87 name=(null) inode=13843 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=88 name=(null) inode=13842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=89 name=(null) inode=13844 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=90 name=(null) inode=13842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=91 name=(null) inode=13845 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=92 name=(null) inode=13842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=93 name=(null) inode=13846 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=94 name=(null) inode=13842 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=95 name=(null) inode=13847 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=96 name=(null) inode=13827 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=97 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=98 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=99 name=(null) inode=13849 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=100 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=101 name=(null) inode=13850 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=102 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:32:32.588000 audit: PATH item=103 name=(null) inode=13851 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=104 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=105 name=(null) inode=13852 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=106 name=(null) inode=13848 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=107 name=(null) inode=13853 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=108 name=(null) inode=1 dev=00:07 mode=040700 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PATH item=109 name=(null) inode=13854 dev=00:07 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:debugfs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Dec 13 14:32:32.588000 audit: PROCTITLE proctitle="(udev-worker)"
Dec 13 14:32:32.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.616676 systemd-networkd[1441]: lo: Link UP
Dec 13 14:32:32.616683 systemd-networkd[1441]: lo: Gained carrier
Dec 13 14:32:32.618161 systemd-networkd[1441]: Enumeration completed
Dec 13 14:32:32.618289 systemd[1]: Started systemd-networkd.service.
Dec 13 14:32:32.620979 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:32:32.622822 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:32:32.628282 systemd-networkd[1441]: eth0: Link UP
Dec 13 14:32:32.628868 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Dec 13 14:32:32.628448 systemd-networkd[1441]: eth0: Gained carrier
Dec 13 14:32:32.638906 systemd-networkd[1441]: eth0: DHCPv4 address 172.31.27.196/20, gateway 172.31.16.1 acquired from 172.31.16.1
Dec 13 14:32:32.659751 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input5
Dec 13 14:32:32.670749 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1440)
Dec 13 14:32:32.680750 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0xb100, revision 255
Dec 13 14:32:32.685753 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 14:32:32.777785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:32:32.860153 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:32:32.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.862404 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:32:32.913660 lvm[1546]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:32:32.949211 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:32:32.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.950419 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:32:32.959504 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:32:32.971548 lvm[1547]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:32:32.996066 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:32:32.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:32.997176 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:32:32.998121 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:32:32.998150 systemd[1]: Reached target local-fs.target.
Dec 13 14:32:32.999036 systemd[1]: Reached target machines.target.
Dec 13 14:32:33.001230 systemd[1]: Starting ldconfig.service...
Dec 13 14:32:33.003022 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:33.003172 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:33.004412 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:32:33.006558 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:32:33.009957 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:32:33.012557 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:32:33.038232 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:32:33.046592 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:32:33.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:32:33.050916 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:32:33.051202 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:32:33.053361 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1549 (bootctl)
Dec 13 14:32:33.055396 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:32:33.078754 kernel: loop0: detected capacity change from 0 to 205544
Dec 13 14:32:33.261742 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:32:33.271161 systemd-fsck[1558]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:32:33.271161 systemd-fsck[1558]: /dev/nvme0n1p1: 789 files, 119291/258078 clusters
Dec 13 14:32:33.274570 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:32:33.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.277307 systemd[1]: Mounting boot.mount...
Dec 13 14:32:33.296761 kernel: loop1: detected capacity change from 0 to 205544
Dec 13 14:32:33.318406 systemd[1]: Mounted boot.mount.
Dec 13 14:32:33.327577 (sd-sysext)[1562]: Using extensions 'kubernetes'.
Dec 13 14:32:33.329083 (sd-sysext)[1562]: Merged extensions into '/usr'.
Dec 13 14:32:33.350447 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:32:33.351303 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:32:33.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.374647 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:32:33.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.376579 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:33.379127 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:32:33.380308 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:33.383795 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:33.386979 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:33.390046 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:33.392054 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:33.392255 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:33.392479 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:33.396902 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:32:33.398298 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:33.398473 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:33.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success'
Dec 13 14:32:33.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.400270 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:33.400444 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:33.402128 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:33.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.402507 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:33.402632 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:33.404430 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:33.406970 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:32:33.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.409481 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:32:33.412928 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:32:33.421141 systemd[1]: Reloading.
Dec 13 14:32:33.462552 systemd-tmpfiles[1580]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:32:33.471834 systemd-tmpfiles[1580]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:32:33.488209 systemd-tmpfiles[1580]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:32:33.530774 /usr/lib/systemd/system-generators/torcx-generator[1599]: time="2024-12-13T14:32:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:32:33.530829 /usr/lib/systemd/system-generators/torcx-generator[1599]: time="2024-12-13T14:32:33Z" level=info msg="torcx already run"
Dec 13 14:32:33.719892 systemd-networkd[1441]: eth0: Gained IPv6LL
Dec 13 14:32:33.766680 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:32:33.766708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:32:33.820418 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
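The two locksmithd.service deprecation warnings above could be addressed with a drop-in that clears the legacy cgroup-v1 directives and sets the replacements systemd suggests. This is a hedged sketch: the drop-in path is conventional, and the weight/limit values are illustrative, not taken from the shipped unit:

```ini
# /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf (hypothetical drop-in)
# Empty assignments reset the deprecated settings from the vendor unit;
# CPUWeight=/MemoryMax= are the modern equivalents named in the log.
[Service]
CPUShares=
CPUWeight=100
MemoryLimit=
MemoryMax=512M
```

A `systemctl daemon-reload` after creating the drop-in should make the warnings disappear on the next unit load.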
Dec 13 14:32:33.964000 audit: BPF prog-id=27 op=LOAD
Dec 13 14:32:33.966000 audit: BPF prog-id=23 op=UNLOAD
Dec 13 14:32:33.966000 audit: BPF prog-id=28 op=LOAD
Dec 13 14:32:33.966000 audit: BPF prog-id=24 op=UNLOAD
Dec 13 14:32:33.967000 audit: BPF prog-id=29 op=LOAD
Dec 13 14:32:33.967000 audit: BPF prog-id=30 op=LOAD
Dec 13 14:32:33.967000 audit: BPF prog-id=25 op=UNLOAD
Dec 13 14:32:33.967000 audit: BPF prog-id=26 op=UNLOAD
Dec 13 14:32:33.974000 audit: BPF prog-id=31 op=LOAD
Dec 13 14:32:33.974000 audit: BPF prog-id=32 op=LOAD
Dec 13 14:32:33.974000 audit: BPF prog-id=21 op=UNLOAD
Dec 13 14:32:33.974000 audit: BPF prog-id=22 op=UNLOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=33 op=LOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=18 op=UNLOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=34 op=LOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=35 op=LOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=19 op=UNLOAD
Dec 13 14:32:33.977000 audit: BPF prog-id=20 op=UNLOAD
Dec 13 14:32:33.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:33.984162 systemd[1]: Finished systemd-networkd-wait-online.service.
Dec 13 14:32:33.986506 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:32:33.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.009598 systemd[1]: Starting audit-rules.service...
Dec 13 14:32:34.018488 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:32:34.024501 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:32:34.027000 audit: BPF prog-id=36 op=LOAD
Dec 13 14:32:34.031234 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:32:34.033000 audit: BPF prog-id=37 op=LOAD
Dec 13 14:32:34.036488 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:32:34.041022 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:32:34.070000 audit[1659]: SYSTEM_BOOT pid=1659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.057124 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.057478 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.061532 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:34.065063 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:34.068353 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:34.069917 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.070227 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:34.070650 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.072056 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:32:34.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.075000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.075612 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:34.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.076041 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:34.081107 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:34.084974 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:32:34.086667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:34.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.086848 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:34.088378 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:34.091219 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.091631 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.094410 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:34.098573 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:34.099834 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.100028 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:34.100188 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:34.100301 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.101748 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:34.101935 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:34.103682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:34.104135 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:34.105816 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:34.105990 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:34.107516 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:34.107669 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.114176 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.114642 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.118311 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:32:34.123425 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:32:34.126518 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:32:34.129851 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:32:34.132091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.132334 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:34.132535 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:32:34.132660 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 14:32:34.134191 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:32:34.134389 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:32:34.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.136289 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:32:34.136462 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:32:34.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.138560 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:32:34.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.138753 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:32:34.140165 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:32:34.140284 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:32:34.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.142530 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:32:34.142694 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:32:34.145979 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:32:34.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.257293 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:32:34.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:34.258842 systemd[1]: Reached target time-set.target.
Dec 13 14:32:34.266165 systemd-resolved[1657]: Positive Trust Anchors:
Dec 13 14:32:34.266184 systemd-resolved[1657]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:32:34.266274 systemd-resolved[1657]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:32:34.275951 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:32:34.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:32:35.178098 systemd-timesyncd[1658]: Contacted time server 157.245.125.229:123 (0.flatcar.pool.ntp.org).
Dec 13 14:32:35.178257 systemd-timesyncd[1658]: Initial clock synchronization to Fri 2024-12-13 14:32:35.177927 UTC.
Dec 13 14:32:35.240000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:32:35.240000 audit[1683]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdd9365c10 a2=420 a3=0 items=0 ppid=1653 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:32:35.240000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:32:35.241927 augenrules[1683]: No rules
Dec 13 14:32:35.244383 systemd[1]: Finished audit-rules.service.
Dec 13 14:32:35.267340 systemd-resolved[1657]: Defaulting to hostname 'linux'.
Dec 13 14:32:35.270722 systemd[1]: Started systemd-resolved.service.
Dec 13 14:32:35.271914 systemd[1]: Reached target network.target.
Dec 13 14:32:35.273780 systemd[1]: Reached target network-online.target.
Dec 13 14:32:35.276789 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:32:35.610210 ldconfig[1548]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:32:35.621410 systemd[1]: Finished ldconfig.service.
Dec 13 14:32:35.623976 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:32:35.633500 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:32:35.635106 systemd[1]: Reached target sysinit.target.
Dec 13 14:32:35.636328 systemd[1]: Started motdgen.path.
Dec 13 14:32:35.637282 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:32:35.638603 systemd[1]: Started logrotate.timer.
Dec 13 14:32:35.639726 systemd[1]: Started mdadm.timer.
Dec 13 14:32:35.640640 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:32:35.641621 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
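The `augenrules[1683]: No rules` line above means `auditctl -R /etc/audit/audit.rules` loaded an empty rules file, so only the default kernel audit events (like the SERVICE_START/STOP records throughout this log) are emitted. If rules were wanted, a minimal auditd rules file could look like the following sketch; the watch target and key are generic illustrations, not part of this image:

```
# /etc/audit/rules.d/10-example.rules  (hypothetical)
# -D flushes any existing rules; -b sets the kernel backlog buffer.
-D
-b 8192
# Watch writes and attribute changes to /etc/passwd, tagged "identity".
-w /etc/passwd -p wa -k identity
```

After regenerating with `augenrules --load`, matching events would appear in the audit log under the chosen key.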
Dec 13 14:32:35.641653 systemd[1]: Reached target paths.target.
Dec 13 14:32:35.642528 systemd[1]: Reached target timers.target.
Dec 13 14:32:35.644000 systemd[1]: Listening on dbus.socket.
Dec 13 14:32:35.646116 systemd[1]: Starting docker.socket...
Dec 13 14:32:35.653774 systemd[1]: Listening on sshd.socket.
Dec 13 14:32:35.655440 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:35.656396 systemd[1]: Listening on docker.socket.
Dec 13 14:32:35.657912 systemd[1]: Reached target sockets.target.
Dec 13 14:32:35.659032 systemd[1]: Reached target basic.target.
Dec 13 14:32:35.660077 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:32:35.660107 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:32:35.661249 systemd[1]: Started amazon-ssm-agent.service.
Dec 13 14:32:35.664051 systemd[1]: Starting containerd.service...
Dec 13 14:32:35.666682 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Dec 13 14:32:35.669491 systemd[1]: Starting dbus.service...
Dec 13 14:32:35.671626 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:32:35.682559 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:32:35.683907 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:32:35.712198 systemd[1]: Starting kubelet.service...
Dec 13 14:32:35.723281 systemd[1]: Starting motdgen.service...
Dec 13 14:32:35.726117 systemd[1]: Started nvidia.service.
Dec 13 14:32:35.728856 systemd[1]: Starting prepare-helm.service...
Dec 13 14:32:35.731407 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:32:35.734483 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:32:35.741705 systemd[1]: Starting systemd-logind.service...
Dec 13 14:32:35.742750 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:32:35.742828 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:32:35.743998 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 14:32:35.745502 systemd[1]: Starting update-engine.service...
Dec 13 14:32:35.748255 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:32:35.775635 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:32:35.775964 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:32:35.787611 extend-filesystems[1696]: Found loop1
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p1
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p2
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p3
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found usr
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p4
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p6
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p7
Dec 13 14:32:35.789069 extend-filesystems[1696]: Found nvme0n1p9
Dec 13 14:32:35.789069 extend-filesystems[1696]: Checking size of /dev/nvme0n1p9
Dec 13 14:32:35.876082 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:32:35.876368 systemd[1]: Finished motdgen.service.
Dec 13 14:32:35.880883 jq[1707]: true
Dec 13 14:32:35.886984 jq[1695]: false
Dec 13 14:32:35.883706 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:32:35.883917 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:32:35.911858 env[1719]: time="2024-12-13T14:32:35.911807135Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Dec 13 14:32:35.937862 env[1719]: time="2024-12-13T14:32:35.937813209Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 14:32:35.938121 env[1719]: time="2024-12-13T14:32:35.938104270Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.939750 env[1719]: time="2024-12-13T14:32:35.939721251Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:35.939837 env[1719]: time="2024-12-13T14:32:35.939824806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940122 env[1719]: time="2024-12-13T14:32:35.940102872Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940251 env[1719]: time="2024-12-13T14:32:35.940238148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940338 env[1719]: time="2024-12-13T14:32:35.940310088Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Dec 13 14:32:35.940383 env[1719]: time="2024-12-13T14:32:35.940335583Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940468 env[1719]: time="2024-12-13T14:32:35.940447262Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940703 env[1719]: time="2024-12-13T14:32:35.940676213Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940956 env[1719]: time="2024-12-13T14:32:35.940847157Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 14:32:35.940956 env[1719]: time="2024-12-13T14:32:35.940950228Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 14:32:35.941064 env[1719]: time="2024-12-13T14:32:35.941018844Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Dec 13 14:32:35.941064 env[1719]: time="2024-12-13T14:32:35.941036718Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 14:32:36.100199 jq[1728]: true
Dec 13 14:32:36.423815 systemd-logind[1705]: Watching system buttons on /dev/input/event1 (Power Button)
Dec 13 14:32:36.424196 systemd-logind[1705]: Watching system buttons on /dev/input/event2 (Sleep Button)
Dec 13 14:32:36.424284 systemd-logind[1705]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 14:32:36.431070 extend-filesystems[1696]: Resized partition /dev/nvme0n1p9
Dec 13 14:32:36.424482 systemd-logind[1705]: New seat seat0.
Dec 13 14:32:36.464408 tar[1711]: linux-amd64/helm
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828124920Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828199446Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828220770Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828372074Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828399862Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828422190Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828457009Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828478103Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828531868Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828554055Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828573185Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828591885Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.828883390Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 14:32:36.831262 env[1719]: time="2024-12-13T14:32:36.829022636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829518863Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829554207Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829587422Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829723180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829758074Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829777792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829795197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829826723Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829843575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829860111Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829874917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.829905222Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.830086156Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.830104788Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.832052 env[1719]: time="2024-12-13T14:32:36.830594360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.833895 env[1719]: time="2024-12-13T14:32:36.830668198Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 14:32:36.833895 env[1719]: time="2024-12-13T14:32:36.830695736Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Dec 13 14:32:36.833895 env[1719]: time="2024-12-13T14:32:36.830713226Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 14:32:36.833895 env[1719]: time="2024-12-13T14:32:36.830751759Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Dec 13 14:32:36.833895 env[1719]: time="2024-12-13T14:32:36.830795672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 14:32:36.834083 env[1719]: time="2024-12-13T14:32:36.831235345Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 14:32:36.834083 env[1719]: time="2024-12-13T14:32:36.832406446Z" level=info msg="Connect containerd service"
Dec 13 14:32:36.834083 env[1719]: time="2024-12-13T14:32:36.832476742Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834618635Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834750820Z" level=info msg="Start subscribing containerd event"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834806985Z" level=info msg="Start recovering state"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834878432Z" level=info msg="Start event monitor"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834897442Z" level=info msg="Start snapshots syncer"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834909866Z" level=info msg="Start cni network conf syncer for default"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.834921399Z" level=info msg="Start streaming server"
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.835067937Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.835221329Z" level=info msg=serving...
address=/run/containerd/containerd.sock Dec 13 14:32:36.932692 env[1719]: time="2024-12-13T14:32:36.864257702Z" level=info msg="containerd successfully booted in 0.953161s" Dec 13 14:32:36.933237 extend-filesystems[1756]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 14:32:36.862226 systemd[1]: Started containerd.service. Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: 2024/12/13 14:32:36 Failed to load instance info from vault. RegistrationKey does not exist. Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: Initializing new seelog logger Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: New Seelog Logger Creation Complete Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: 2024/12/13 14:32:36 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Dec 13 14:32:36.934762 amazon-ssm-agent[1691]: 2024/12/13 14:32:36 processing appconfig overrides Dec 13 14:32:37.208465 tar[1711]: linux-amd64/LICENSE Dec 13 14:32:37.273961 tar[1711]: linux-amd64/README.md Dec 13 14:32:37.289172 systemd[1]: Finished prepare-helm.service. Dec 13 14:32:37.405074 systemd[1]: nvidia.service: Deactivated successfully. Dec 13 14:32:37.499559 bash[1747]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:32:37.500857 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:32:37.517292 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Dec 13 14:32:37.615912 dbus-daemon[1694]: [system] SELinux support is enabled Dec 13 14:32:37.616126 systemd[1]: Started dbus.service. 
Dec 13 14:32:37.618184 dbus-daemon[1694]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1441 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Dec 13 14:32:37.619874 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:32:37.620700 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.systemd1'
Dec 13 14:32:37.619913 systemd[1]: Reached target system-config.target.
Dec 13 14:32:37.620962 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:32:37.620987 systemd[1]: Reached target user-config.target.
Dec 13 14:32:37.635459 systemd[1]: Started systemd-logind.service.
Dec 13 14:32:37.639442 systemd[1]: Starting systemd-hostnamed.service...
Dec 13 14:32:37.748298 update_engine[1706]: I1213 14:32:37.746903  1706 main.cc:92] Flatcar Update Engine starting
Dec 13 14:32:37.763081 coreos-metadata[1693]: Dec 13 14:32:37.762 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Dec 13 14:32:37.768195 systemd[1]: Started update-engine.service.
Dec 13 14:32:37.768457 update_engine[1706]: I1213 14:32:37.768249  1706 update_check_scheduler.cc:74] Next update check in 7m10s
Dec 13 14:32:37.772092 systemd[1]: Started locksmithd.service.
Dec 13 14:32:37.804367 coreos-metadata[1693]: Dec 13 14:32:37.804 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Dec 13 14:32:37.804935 coreos-metadata[1693]: Dec 13 14:32:37.804 INFO Fetch successful
Dec 13 14:32:37.804935 coreos-metadata[1693]: Dec 13 14:32:37.804 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Dec 13 14:32:37.805666 coreos-metadata[1693]: Dec 13 14:32:37.805 INFO Fetch successful
Dec 13 14:32:37.816402 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.hostname1'
Dec 13 14:32:37.816583 systemd[1]: Started systemd-hostnamed.service.
Dec 13 14:32:37.817031 dbus-daemon[1694]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1867 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Dec 13 14:32:37.823155 systemd[1]: Starting polkit.service...
Dec 13 14:32:37.883089 polkitd[1869]: Started polkitd version 121
Dec 13 14:32:38.036195 polkitd[1869]: Loading rules from directory /etc/polkit-1/rules.d
Dec 13 14:32:38.198230 polkitd[1869]: Loading rules from directory /usr/share/polkit-1/rules.d
Dec 13 14:32:38.227749 unknown[1693]: wrote ssh authorized keys file for user: core
Dec 13 14:32:38.265319 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Dec 13 14:32:38.233031 systemd[1]: Started polkit.service.
Dec 13 14:32:38.229413 polkitd[1869]: Finished loading, compiling and executing 2 rules
Dec 13 14:32:38.231319 dbus-daemon[1694]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Dec 13 14:32:38.304869 systemd-hostnamed[1867]: Hostname set to (transient)
Dec 13 14:32:38.236959 polkitd[1869]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Dec 13 14:32:38.304993 systemd-resolved[1657]: System hostname changed to 'ip-172-31-27-196'.
Dec 13 14:32:38.343437 extend-filesystems[1756]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Dec 13 14:32:38.343437 extend-filesystems[1756]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:32:38.343437 extend-filesystems[1756]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Dec 13 14:32:38.351453 extend-filesystems[1696]: Resized filesystem in /dev/nvme0n1p9
Dec 13 14:32:38.347917 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 14:32:38.348226 systemd[1]: Finished extend-filesystems.service.
Dec 13 14:32:38.356616 update-ssh-keys[1879]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 14:32:38.357319 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Dec 13 14:32:38.426798 locksmithd[1868]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 14:32:38.447376 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Create new startup processor
Dec 13 14:32:38.449141 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [LongRunningPluginsManager] registered plugins: {}
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing bookkeeping folders
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO removing the completed state files
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing bookkeeping folders for long running plugins
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing healthcheck folders for long running plugins
Dec 13 14:32:38.449263 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing locations for inventory plugin
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing default location for custom inventory
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing default location for file inventory
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Initializing default location for role inventory
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Init the cloudwatchlogs publisher
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:softwareInventory
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:runPowerShellScript
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:downloadContent
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:runDocument
Dec 13 14:32:38.449669 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:updateSsmAgent
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:configureDocker
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:runDockerAction
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:refreshAssociation
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform independent plugin aws:configurePackage
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Successfully loaded platform dependent plugin aws:runShellScript
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Dec 13 14:32:38.450234 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO OS: linux, Arch: amd64
Dec 13 14:32:38.454929 amazon-ssm-agent[1691]: datastore file /var/lib/amazon/ssm/i-0fa012ad98147c2d7/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Dec 13 14:32:38.548008 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] Starting document processing engine...
Dec 13 14:32:38.642747 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Dec 13 14:32:38.737056 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Dec 13 14:32:38.831507 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] Starting message polling
Dec 13 14:32:38.927585 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] Starting send replies to MDS
Dec 13 14:32:39.022913 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [instanceID=i-0fa012ad98147c2d7] Starting association polling
Dec 13 14:32:39.119157 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Dec 13 14:32:39.219281 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [Association] Launching response handler
Dec 13 14:32:39.318281 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Dec 13 14:32:39.423066 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Dec 13 14:32:39.523087 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Dec 13 14:32:39.623279 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] Starting session document processing engine...
Dec 13 14:32:39.720154 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] [EngineProcessor] Starting
Dec 13 14:32:39.819303 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Dec 13 14:32:39.921135 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0fa012ad98147c2d7, requestId: 80a0fa76-0d41-4293-afe3-a46dabff1a0e
Dec 13 14:32:40.017997 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [OfflineService] Starting document processing engine...
Dec 13 14:32:40.116600 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [OfflineService] [EngineProcessor] Starting
Dec 13 14:32:40.216974 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [OfflineService] [EngineProcessor] Initial processing
Dec 13 14:32:40.318612 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [OfflineService] Starting message polling
Dec 13 14:32:40.416801 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [OfflineService] Starting send replies to MDS
Dec 13 14:32:40.516498 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [LongRunningPluginsManager] starting long running plugin manager
Dec 13 14:32:40.616539 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Dec 13 14:32:40.694407 sshd_keygen[1713]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 14:32:40.718016 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [HealthCheck] HealthCheck reporting agent health.
Dec 13 14:32:40.736620 systemd[1]: Finished sshd-keygen.service.
Dec 13 14:32:40.740673 systemd[1]: Starting issuegen.service...
Dec 13 14:32:40.759763 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 14:32:40.760013 systemd[1]: Finished issuegen.service.
Dec 13 14:32:40.768286 systemd[1]: Starting systemd-user-sessions.service...
Dec 13 14:32:40.781997 systemd[1]: Finished systemd-user-sessions.service.
Dec 13 14:32:40.785132 systemd[1]: Started getty@tty1.service.
Dec 13 14:32:40.788360 systemd[1]: Started serial-getty@ttyS0.service.
Dec 13 14:32:40.789712 systemd[1]: Reached target getty.target.
Dec 13 14:32:40.816489 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] listening reply.
Dec 13 14:32:40.837233 systemd[1]: Started kubelet.service.
Dec 13 14:32:40.841053 systemd[1]: Reached target multi-user.target.
Dec 13 14:32:40.846455 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Dec 13 14:32:40.875863 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Dec 13 14:32:40.876402 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Dec 13 14:32:40.879701 systemd[1]: Startup finished in 673ms (kernel) + 7.177s (initrd) + 13.151s (userspace) = 21.002s.
Dec 13 14:32:40.918981 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Dec 13 14:32:41.017930 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [StartupProcessor] Executing startup processor tasks
Dec 13 14:32:41.118650 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Dec 13 14:32:41.217841 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Dec 13 14:32:41.317324 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.6
Dec 13 14:32:41.416973 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0fa012ad98147c2d7?role=subscribe&stream=input
Dec 13 14:32:41.516809 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0fa012ad98147c2d7?role=subscribe&stream=input
Dec 13 14:32:41.616770 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] Starting receiving message from control channel
Dec 13 14:32:41.717020 amazon-ssm-agent[1691]: 2024-12-13 14:32:38 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Dec 13 14:32:42.592726 amazon-ssm-agent[1691]: 2024-12-13 14:32:42 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Dec 13 14:32:42.619901 kubelet[1899]: E1213 14:32:42.619848    1899 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:32:42.621726 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:32:42.621921 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:32:42.622217 systemd[1]: kubelet.service: Consumed 1.147s CPU time.
Dec 13 14:32:44.473939 systemd[1]: Created slice system-sshd.slice.
Dec 13 14:32:44.480379 systemd[1]: Started sshd@0-172.31.27.196:22-139.178.89.65:60422.service.
Dec 13 14:32:44.792905 sshd[1906]: Accepted publickey for core from 139.178.89.65 port 60422 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:44.851341 sshd[1906]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:44.939585 systemd[1]: Created slice user-500.slice.
Dec 13 14:32:44.941531 systemd[1]: Starting user-runtime-dir@500.service...
Dec 13 14:32:44.946368 systemd-logind[1705]: New session 1 of user core.
Dec 13 14:32:44.958959 systemd[1]: Finished user-runtime-dir@500.service.
Dec 13 14:32:44.962224 systemd[1]: Starting user@500.service...
Dec 13 14:32:44.967051 (systemd)[1909]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:45.075984 systemd[1909]: Queued start job for default target default.target.
Dec 13 14:32:45.076641 systemd[1909]: Reached target paths.target.
Dec 13 14:32:45.076675 systemd[1909]: Reached target sockets.target.
Dec 13 14:32:45.076693 systemd[1909]: Reached target timers.target.
Dec 13 14:32:45.076752 systemd[1909]: Reached target basic.target.
Dec 13 14:32:45.076812 systemd[1909]: Reached target default.target.
Dec 13 14:32:45.076854 systemd[1909]: Startup finished in 101ms.
Dec 13 14:32:45.078458 systemd[1]: Started user@500.service.
Dec 13 14:32:45.080173 systemd[1]: Started session-1.scope.
Dec 13 14:32:45.227823 systemd[1]: Started sshd@1-172.31.27.196:22-139.178.89.65:60430.service.
Dec 13 14:32:45.407613 sshd[1918]: Accepted publickey for core from 139.178.89.65 port 60430 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:45.409426 sshd[1918]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:45.416027 systemd[1]: Started session-2.scope.
Dec 13 14:32:45.416872 systemd-logind[1705]: New session 2 of user core.
Dec 13 14:32:45.547533 sshd[1918]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:45.552142 systemd[1]: sshd@1-172.31.27.196:22-139.178.89.65:60430.service: Deactivated successfully.
Dec 13 14:32:45.553088 systemd[1]: session-2.scope: Deactivated successfully.
Dec 13 14:32:45.553939 systemd-logind[1705]: Session 2 logged out. Waiting for processes to exit.
Dec 13 14:32:45.554857 systemd-logind[1705]: Removed session 2.
Dec 13 14:32:45.572709 systemd[1]: Started sshd@2-172.31.27.196:22-139.178.89.65:60442.service.
Dec 13 14:32:45.739670 sshd[1924]: Accepted publickey for core from 139.178.89.65 port 60442 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:45.741971 sshd[1924]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:45.747150 systemd-logind[1705]: New session 3 of user core.
Dec 13 14:32:45.747701 systemd[1]: Started session-3.scope.
Dec 13 14:32:45.867591 sshd[1924]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:45.872471 systemd[1]: sshd@2-172.31.27.196:22-139.178.89.65:60442.service: Deactivated successfully.
Dec 13 14:32:45.873797 systemd[1]: session-3.scope: Deactivated successfully.
Dec 13 14:32:45.874864 systemd-logind[1705]: Session 3 logged out. Waiting for processes to exit.
Dec 13 14:32:45.876082 systemd-logind[1705]: Removed session 3.
Dec 13 14:32:45.894912 systemd[1]: Started sshd@3-172.31.27.196:22-139.178.89.65:60444.service.
Dec 13 14:32:46.066714 sshd[1930]: Accepted publickey for core from 139.178.89.65 port 60444 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:46.072544 sshd[1930]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:46.094470 systemd-logind[1705]: New session 4 of user core.
Dec 13 14:32:46.095055 systemd[1]: Started session-4.scope.
Dec 13 14:32:46.230089 sshd[1930]: pam_unix(sshd:session): session closed for user core
Dec 13 14:32:46.233209 systemd[1]: sshd@3-172.31.27.196:22-139.178.89.65:60444.service: Deactivated successfully.
Dec 13 14:32:46.234050 systemd[1]: session-4.scope: Deactivated successfully.
Dec 13 14:32:46.234718 systemd-logind[1705]: Session 4 logged out. Waiting for processes to exit.
Dec 13 14:32:46.235790 systemd-logind[1705]: Removed session 4.
Dec 13 14:32:46.255489 systemd[1]: Started sshd@4-172.31.27.196:22-139.178.89.65:60452.service.
Dec 13 14:32:46.421314 sshd[1936]: Accepted publickey for core from 139.178.89.65 port 60452 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk
Dec 13 14:32:46.427034 sshd[1936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:32:46.445398 systemd[1]: Started session-5.scope.
Dec 13 14:32:46.446480 systemd-logind[1705]: New session 5 of user core.
Dec 13 14:32:46.598395 sudo[1939]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Dec 13 14:32:46.598728 sudo[1939]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Dec 13 14:32:46.629869 systemd[1]: Starting docker.service...
Dec 13 14:32:46.683377 env[1949]: time="2024-12-13T14:32:46.682174033Z" level=info msg="Starting up"
Dec 13 14:32:46.684862 env[1949]: time="2024-12-13T14:32:46.684837455Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:32:46.684981 env[1949]: time="2024-12-13T14:32:46.684967802Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:32:46.685051 env[1949]: time="2024-12-13T14:32:46.685038778Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:32:46.685116 env[1949]: time="2024-12-13T14:32:46.685103529Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:32:46.688812 env[1949]: time="2024-12-13T14:32:46.688780096Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Dec 13 14:32:46.688974 env[1949]: time="2024-12-13T14:32:46.688960205Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Dec 13 14:32:46.689050 env[1949]: time="2024-12-13T14:32:46.689036158Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Dec 13 14:32:46.689107 env[1949]: time="2024-12-13T14:32:46.689097002Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Dec 13 14:32:46.703389 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3811221616-merged.mount: Deactivated successfully.
Dec 13 14:32:46.755957 env[1949]: time="2024-12-13T14:32:46.755920231Z" level=info msg="Loading containers: start."
Dec 13 14:32:46.992299 kernel: Initializing XFRM netlink socket
Dec 13 14:32:47.087325 env[1949]: time="2024-12-13T14:32:47.086753826Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Dec 13 14:32:47.088747 (udev-worker)[1959]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:32:47.251778 systemd-networkd[1441]: docker0: Link UP
Dec 13 14:32:47.269117 env[1949]: time="2024-12-13T14:32:47.269070758Z" level=info msg="Loading containers: done."
Dec 13 14:32:47.291054 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck848356600-merged.mount: Deactivated successfully.
Dec 13 14:32:47.300664 env[1949]: time="2024-12-13T14:32:47.300618456Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Dec 13 14:32:47.300892 env[1949]: time="2024-12-13T14:32:47.300840877Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Dec 13 14:32:47.300983 env[1949]: time="2024-12-13T14:32:47.300963575Z" level=info msg="Daemon has completed initialization"
Dec 13 14:32:47.317781 systemd[1]: Started docker.service.
Dec 13 14:32:47.331580 env[1949]: time="2024-12-13T14:32:47.331516272Z" level=info msg="API listen on /run/docker.sock"
Dec 13 14:32:48.613936 env[1719]: time="2024-12-13T14:32:48.613763098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\""
Dec 13 14:32:49.303878 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3713463599.mount: Deactivated successfully.
Dec 13 14:32:51.790555 env[1719]: time="2024-12-13T14:32:51.790500572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:51.792947 env[1719]: time="2024-12-13T14:32:51.792906213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:51.795134 env[1719]: time="2024-12-13T14:32:51.795098191Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:51.797085 env[1719]: time="2024-12-13T14:32:51.797052927Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:ace6a943b058439bd6daeb74f152e7c36e6fc0b5e481cdff9364cd6ca0473e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:51.797955 env[1719]: time="2024-12-13T14:32:51.797919558Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.4\" returns image reference \"sha256:bdc2eadbf366279693097982a31da61cc2f1d90f07ada3f4b3b91251a18f665e\""
Dec 13 14:32:51.800484 env[1719]: time="2024-12-13T14:32:51.800454866Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\""
Dec 13 14:32:52.853652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Dec 13 14:32:52.854000 systemd[1]: Stopped kubelet.service.
Dec 13 14:32:52.854060 systemd[1]: kubelet.service: Consumed 1.147s CPU time.
Dec 13 14:32:52.856515 systemd[1]: Starting kubelet.service...
Dec 13 14:32:53.210610 systemd[1]: Started kubelet.service.
Dec 13 14:32:53.269411 kubelet[2074]: E1213 14:32:53.269364    2074 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:32:53.272788 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:32:53.272915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:32:54.513610 env[1719]: time="2024-12-13T14:32:54.513557528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:54.520375 env[1719]: time="2024-12-13T14:32:54.520330065Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:54.524368 env[1719]: time="2024-12-13T14:32:54.524326771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:54.527785 env[1719]: time="2024-12-13T14:32:54.527744597Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4bd1d4a449e7a1a4f375bd7c71abf48a95f8949b38f725ded255077329f21f7b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:54.528584 env[1719]: time="2024-12-13T14:32:54.528547623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.4\" returns image reference \"sha256:359b9f2307326a4c66172318ca63ee9792c3146ca57d53329239bd123ea70079\""
Dec 13 14:32:54.529225 env[1719]: time="2024-12-13T14:32:54.529198404Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\""
Dec 13 14:32:56.695635 env[1719]: time="2024-12-13T14:32:56.695573524Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:56.704227 env[1719]: time="2024-12-13T14:32:56.704168444Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:56.710595 env[1719]: time="2024-12-13T14:32:56.710549463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:56.712820 env[1719]: time="2024-12-13T14:32:56.712778964Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:1a3081cb7d21763d22eb2c0781cc462d89f501ed523ad558dea1226f128fbfdd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:56.713640 env[1719]: time="2024-12-13T14:32:56.713604229Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.4\" returns image reference \"sha256:3a66234066fe10fa299c0a52265f90a107450f0372652867118cd9007940d674\""
Dec 13 14:32:56.714326 env[1719]: time="2024-12-13T14:32:56.714301157Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\""
Dec 13 14:32:57.997895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1398042596.mount: Deactivated successfully.
Dec 13 14:32:58.730693 env[1719]: time="2024-12-13T14:32:58.730640900Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:58.732722 env[1719]: time="2024-12-13T14:32:58.732671781Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:58.735818 env[1719]: time="2024-12-13T14:32:58.735753121Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:58.737353 env[1719]: time="2024-12-13T14:32:58.737314327Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:1739b3febca392035bf6edfe31efdfa55226be7b57389b2001ae357f7dcb99cf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:32:58.737718 env[1719]: time="2024-12-13T14:32:58.737685807Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.4\" returns image reference \"sha256:ebf80573666f86f115452db568feb34f6f771c3bdc7bfed14b9577f992cfa300\""
Dec 13 14:32:58.738534 env[1719]: time="2024-12-13T14:32:58.738502688Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Dec 13 14:32:59.276401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3216096806.mount: Deactivated successfully.
Dec 13 14:33:00.424197 env[1719]: time="2024-12-13T14:33:00.424135134Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.428701 env[1719]: time="2024-12-13T14:33:00.427729568Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.439428 env[1719]: time="2024-12-13T14:33:00.439368561Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.449669 env[1719]: time="2024-12-13T14:33:00.449618477Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Dec 13 14:33:00.450296 env[1719]: time="2024-12-13T14:33:00.449781047Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.450881 env[1719]: time="2024-12-13T14:33:00.450850495Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Dec 13 14:33:00.981699 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2820555275.mount: Deactivated successfully.
Dec 13 14:33:00.993940 env[1719]: time="2024-12-13T14:33:00.993885722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.996004 env[1719]: time="2024-12-13T14:33:00.995965108Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:00.998210 env[1719]: time="2024-12-13T14:33:00.998174692Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:01.000087 env[1719]: time="2024-12-13T14:33:01.000051495Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:01.000718 env[1719]: time="2024-12-13T14:33:01.000680754Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Dec 13 14:33:01.001440 env[1719]: time="2024-12-13T14:33:01.001415783Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Dec 13 14:33:01.544637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1794808916.mount: Deactivated successfully.
Dec 13 14:33:03.353579 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Dec 13 14:33:03.353840 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:03.355922 systemd[1]: Starting kubelet.service...
Dec 13 14:33:03.584565 systemd[1]: Started kubelet.service.
Dec 13 14:33:03.677971 kubelet[2084]: E1213 14:33:03.677046 2084 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 14:33:03.679459 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 14:33:03.679625 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 14:33:04.953153 env[1719]: time="2024-12-13T14:33:04.953097404Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:04.956231 env[1719]: time="2024-12-13T14:33:04.956187784Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:04.959090 env[1719]: time="2024-12-13T14:33:04.959053247Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:04.962086 env[1719]: time="2024-12-13T14:33:04.962049251Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:04.962998 env[1719]: time="2024-12-13T14:33:04.962958680Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Dec 13 14:33:08.323172 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
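[Annotation, not part of the log] The kubelet crash loop above (restart counters 1 and 2) is caused by a missing /var/lib/kubelet/config.yaml; on a kubeadm-managed node that file is only written when `kubeadm init` or `kubeadm join` runs. A minimal sketch of the failing startup check, run against a scratch directory rather than the real path so it is safe anywhere (KUBELET_DIR is a stand-in):

```shell
#!/bin/sh
# Illustrative sketch: reproduce the kubelet's config-file check from the log
# against a temporary directory instead of the real /var/lib/kubelet.
KUBELET_DIR="$(mktemp -d)"          # stand-in for /var/lib/kubelet
CONFIG="$KUBELET_DIR/config.yaml"

if [ ! -f "$CONFIG" ]; then
    # Mirrors the "open ...: no such file or directory" error above;
    # the kubelet exits with status 1 and systemd schedules a restart.
    echo "missing: config.yaml"
fi

# Once kubeadm has written the file, the same check passes.
printf 'apiVersion: kubelet.config.k8s.io/v1beta1\nkind: KubeletConfiguration\n' > "$CONFIG"
[ -f "$CONFIG" ] && echo "found: config.yaml"

rm -rf "$KUBELET_DIR"
```

The repeated "Scheduled restart job" entries are systemd applying the unit's Restart= policy after each exit, which is why the failure recurs until the file appears.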
Dec 13 14:33:09.451412 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:09.454144 systemd[1]: Starting kubelet.service...
Dec 13 14:33:09.501149 systemd[1]: Reloading.
Dec 13 14:33:09.685554 /usr/lib/systemd/system-generators/torcx-generator[2135]: time="2024-12-13T14:33:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:33:09.685593 /usr/lib/systemd/system-generators/torcx-generator[2135]: time="2024-12-13T14:33:09Z" level=info msg="torcx already run"
Dec 13 14:33:09.812867 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:33:09.812893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:33:09.839589 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:33:09.968005 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Dec 13 14:33:09.968079 systemd[1]: kubelet.service: Failed with result 'signal'.
Dec 13 14:33:09.968314 systemd[1]: Stopped kubelet.service.
Dec 13 14:33:09.971069 systemd[1]: Starting kubelet.service...
Dec 13 14:33:10.161904 systemd[1]: Started kubelet.service.
Dec 13 14:33:10.225088 kubelet[2192]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:33:10.225496 kubelet[2192]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Dec 13 14:33:10.225496 kubelet[2192]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Dec 13 14:33:10.225496 kubelet[2192]: I1213 14:33:10.225400 2192 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Dec 13 14:33:10.591046 kubelet[2192]: I1213 14:33:10.590677 2192 server.go:486] "Kubelet version" kubeletVersion="v1.31.0"
Dec 13 14:33:10.591046 kubelet[2192]: I1213 14:33:10.590711 2192 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Dec 13 14:33:10.591406 kubelet[2192]: I1213 14:33:10.591373 2192 server.go:929] "Client rotation is on, will bootstrap in background"
Dec 13 14:33:10.656435 kubelet[2192]: I1213 14:33:10.656401 2192 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Dec 13 14:33:10.656973 kubelet[2192]: E1213 14:33:10.656933 2192 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:10.670943 kubelet[2192]: E1213 14:33:10.670893 2192 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Dec 13 14:33:10.670943 kubelet[2192]: I1213 14:33:10.670940 2192 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Dec 13 14:33:10.676573 kubelet[2192]: I1213 14:33:10.676546 2192 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Dec 13 14:33:10.676871 kubelet[2192]: I1213 14:33:10.676684 2192 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Dec 13 14:33:10.677102 kubelet[2192]: I1213 14:33:10.677052 2192 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Dec 13 14:33:10.678053 kubelet[2192]: I1213 14:33:10.677108 2192 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-196","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Dec 13 14:33:10.678299 kubelet[2192]: I1213 14:33:10.678065 2192 topology_manager.go:138] "Creating topology manager with none policy"
Dec 13 14:33:10.678299 kubelet[2192]: I1213 14:33:10.678080 2192 container_manager_linux.go:300] "Creating device plugin manager"
Dec 13 14:33:10.678299 kubelet[2192]: I1213 14:33:10.678239 2192 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:33:10.692125 kubelet[2192]: I1213 14:33:10.692063 2192 kubelet.go:408] "Attempting to sync node with API server"
Dec 13 14:33:10.692125 kubelet[2192]: I1213 14:33:10.692134 2192 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Dec 13 14:33:10.692474 kubelet[2192]: I1213 14:33:10.692180 2192 kubelet.go:314] "Adding apiserver pod source"
Dec 13 14:33:10.692474 kubelet[2192]: I1213 14:33:10.692198 2192 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Dec 13 14:33:10.722656 kubelet[2192]: I1213 14:33:10.722170 2192 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Dec 13 14:33:10.733755 kubelet[2192]: W1213 14:33:10.733675 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-196&limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused
Dec 13 14:33:10.733918 kubelet[2192]: E1213 14:33:10.733772 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-196&limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:10.733918 kubelet[2192]: I1213 14:33:10.733852 2192 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Dec 13 14:33:10.736892 kubelet[2192]: W1213 14:33:10.736806 2192 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Dec 13 14:33:10.741760 kubelet[2192]: W1213 14:33:10.741700 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused
Dec 13 14:33:10.741902 kubelet[2192]: E1213 14:33:10.741770 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:10.742181 kubelet[2192]: I1213 14:33:10.742156 2192 server.go:1269] "Started kubelet"
Dec 13 14:33:10.754344 kubelet[2192]: I1213 14:33:10.754293 2192 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Dec 13 14:33:10.755621 kubelet[2192]: I1213 14:33:10.755596 2192 server.go:460] "Adding debug handlers to kubelet server"
Dec 13 14:33:10.755842 kubelet[2192]: I1213 14:33:10.755573 2192 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Dec 13 14:33:10.757996 kubelet[2192]: I1213 14:33:10.757971 2192 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Dec 13 14:33:10.763763 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Dec 13 14:33:10.763899 kubelet[2192]: I1213 14:33:10.762887 2192 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Dec 13 14:33:10.764385 kubelet[2192]: E1213 14:33:10.760654 2192 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.27.196:6443/api/v1/namespaces/default/events\": dial tcp 172.31.27.196:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-27-196.1810c31e230d05c5 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-27-196,UID:ip-172-31-27-196,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-27-196,},FirstTimestamp:2024-12-13 14:33:10.742132165 +0000 UTC m=+0.574735696,LastTimestamp:2024-12-13 14:33:10.742132165 +0000 UTC m=+0.574735696,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-27-196,}"
Dec 13 14:33:10.767164 kubelet[2192]: E1213 14:33:10.767140 2192 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Dec 13 14:33:10.767595 kubelet[2192]: I1213 14:33:10.767578 2192 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Dec 13 14:33:10.773443 kubelet[2192]: E1213 14:33:10.773411 2192 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ip-172-31-27-196\" not found"
Dec 13 14:33:10.773850 kubelet[2192]: I1213 14:33:10.773837 2192 volume_manager.go:289] "Starting Kubelet Volume Manager"
Dec 13 14:33:10.774091 kubelet[2192]: I1213 14:33:10.774078 2192 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Dec 13 14:33:10.774289 kubelet[2192]: I1213 14:33:10.774258 2192 reconciler.go:26] "Reconciler: start to sync state"
Dec 13 14:33:10.774757 kubelet[2192]: E1213 14:33:10.774725 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-196?timeout=10s\": dial tcp 172.31.27.196:6443: connect: connection refused" interval="200ms"
Dec 13 14:33:10.775630 kubelet[2192]: I1213 14:33:10.775609 2192 factory.go:221] Registration of the systemd container factory successfully
Dec 13 14:33:10.775854 kubelet[2192]: I1213 14:33:10.775835 2192 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Dec 13 14:33:10.776435 kubelet[2192]: W1213 14:33:10.776391 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused
Dec 13 14:33:10.776568 kubelet[2192]: E1213 14:33:10.776547 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:10.778385 kubelet[2192]: I1213 14:33:10.778367 2192 factory.go:221] Registration of the containerd container factory successfully
Dec 13 14:33:10.807844 kubelet[2192]: I1213 14:33:10.807818 2192 cpu_manager.go:214] "Starting CPU manager" policy="none"
Dec 13 14:33:10.807844 kubelet[2192]: I1213 14:33:10.807841 2192 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Dec 13 14:33:10.808056 kubelet[2192]: I1213 14:33:10.807859 2192 state_mem.go:36] "Initialized new in-memory state store"
Dec 13 14:33:10.810600 kubelet[2192]: I1213 14:33:10.810578 2192 policy_none.go:49] "None policy: Start"
Dec 13 14:33:10.811054 kubelet[2192]: I1213 14:33:10.811029 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Dec 13 14:33:10.812903 kubelet[2192]: I1213 14:33:10.812877 2192 memory_manager.go:170] "Starting memorymanager" policy="None"
Dec 13 14:33:10.813064 kubelet[2192]: I1213 14:33:10.813052 2192 state_mem.go:35] "Initializing new in-memory state store"
Dec 13 14:33:10.818228 kubelet[2192]: I1213 14:33:10.818201 2192 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Dec 13 14:33:10.818419 kubelet[2192]: I1213 14:33:10.818399 2192 status_manager.go:217] "Starting to sync pod status with apiserver"
Dec 13 14:33:10.818503 kubelet[2192]: I1213 14:33:10.818430 2192 kubelet.go:2321] "Starting kubelet main sync loop"
Dec 13 14:33:10.818503 kubelet[2192]: E1213 14:33:10.818481 2192 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Dec 13 14:33:10.824288 kubelet[2192]: W1213 14:33:10.821553 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused
Dec 13 14:33:10.824288 kubelet[2192]: E1213 14:33:10.821620 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:10.824774 systemd[1]: Created slice kubepods.slice.
Dec 13 14:33:10.835671 systemd[1]: Created slice kubepods-burstable.slice.
Dec 13 14:33:10.841067 systemd[1]: Created slice kubepods-besteffort.slice.
Dec 13 14:33:10.852556 kubelet[2192]: I1213 14:33:10.852498 2192 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Dec 13 14:33:10.852832 kubelet[2192]: I1213 14:33:10.852814 2192 eviction_manager.go:189] "Eviction manager: starting control loop"
Dec 13 14:33:10.852920 kubelet[2192]: I1213 14:33:10.852836 2192 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Dec 13 14:33:10.855102 kubelet[2192]: I1213 14:33:10.855082 2192 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Dec 13 14:33:10.859368 kubelet[2192]: E1213 14:33:10.859338 2192 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-27-196\" not found"
Dec 13 14:33:10.932375 systemd[1]: Created slice kubepods-burstable-poda3558864ffd71ee55a0ecfc680d667f3.slice.
Dec 13 14:33:10.945590 systemd[1]: Created slice kubepods-burstable-pod737cab1ed0fb1c664e76023f05fe37c4.slice.
Dec 13 14:33:10.952636 systemd[1]: Created slice kubepods-burstable-podb6fe5b9169d0647dc5e5525b1a577a32.slice.
Dec 13 14:33:10.958238 kubelet[2192]: I1213 14:33:10.958199 2192 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196"
Dec 13 14:33:10.958810 kubelet[2192]: E1213 14:33:10.958780 2192 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.196:6443/api/v1/nodes\": dial tcp 172.31.27.196:6443: connect: connection refused" node="ip-172-31-27-196"
Dec 13 14:33:10.976773 kubelet[2192]: E1213 14:33:10.976523 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-196?timeout=10s\": dial tcp 172.31.27.196:6443: connect: connection refused" interval="400ms"
Dec 13 14:33:11.077219 kubelet[2192]: I1213 14:33:11.077159 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196"
Dec 13 14:33:11.077219 kubelet[2192]: I1213 14:33:11.077194 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-ca-certs\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196"
Dec 13 14:33:11.077219 kubelet[2192]: I1213 14:33:11.077217 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196"
Dec 13 14:33:11.077458 kubelet[2192]: I1213 14:33:11.077239 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196"
Dec 13 14:33:11.077458 kubelet[2192]: I1213 14:33:11.077257 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196"
Dec 13 14:33:11.077458 kubelet[2192]: I1213 14:33:11.077292 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196"
Dec 13 14:33:11.077458 kubelet[2192]: I1213 14:33:11.077313 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6fe5b9169d0647dc5e5525b1a577a32-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-196\" (UID: \"b6fe5b9169d0647dc5e5525b1a577a32\") " pod="kube-system/kube-scheduler-ip-172-31-27-196"
Dec 13 14:33:11.077458 kubelet[2192]: I1213 14:33:11.077327 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196"
Dec 13 14:33:11.077594 kubelet[2192]: I1213 14:33:11.077347 2192 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196"
Dec 13 14:33:11.161006 kubelet[2192]: I1213 14:33:11.160973 2192 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196"
Dec 13 14:33:11.161388 kubelet[2192]: E1213 14:33:11.161360 2192 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.196:6443/api/v1/nodes\": dial tcp 172.31.27.196:6443: connect: connection refused" node="ip-172-31-27-196"
Dec 13 14:33:11.245353 env[1719]: time="2024-12-13T14:33:11.245305481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-196,Uid:a3558864ffd71ee55a0ecfc680d667f3,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:11.252137 env[1719]: time="2024-12-13T14:33:11.252091300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-196,Uid:737cab1ed0fb1c664e76023f05fe37c4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:11.256425 env[1719]: time="2024-12-13T14:33:11.256385551Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-196,Uid:b6fe5b9169d0647dc5e5525b1a577a32,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:11.378148 kubelet[2192]: E1213 14:33:11.378096 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-196?timeout=10s\": dial tcp 172.31.27.196:6443: connect: connection refused" interval="800ms"
Dec 13 14:33:11.563899 kubelet[2192]: I1213 14:33:11.563793 2192 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196"
Dec 13 14:33:11.565205 kubelet[2192]: E1213 14:33:11.564955 2192 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.196:6443/api/v1/nodes\": dial tcp 172.31.27.196:6443: connect: connection refused" node="ip-172-31-27-196"
Dec 13 14:33:11.577366 kubelet[2192]: W1213 14:33:11.577303 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.27.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-196&limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused
Dec 13 14:33:11.577503 kubelet[2192]: E1213 14:33:11.577377 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.27.196:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-27-196&limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError"
Dec 13 14:33:11.743176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4137458129.mount: Deactivated successfully.
Dec 13 14:33:11.750242 env[1719]: time="2024-12-13T14:33:11.750186771Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.752832 env[1719]: time="2024-12-13T14:33:11.752783080Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.757474 env[1719]: time="2024-12-13T14:33:11.757420762Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.758372 env[1719]: time="2024-12-13T14:33:11.758337807Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.759971 env[1719]: time="2024-12-13T14:33:11.759928536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.763447 env[1719]: time="2024-12-13T14:33:11.763370898Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.764338 env[1719]: time="2024-12-13T14:33:11.764304731Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.765054 env[1719]: time="2024-12-13T14:33:11.765027216Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.767499 env[1719]: time="2024-12-13T14:33:11.767459892Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.768392 env[1719]: time="2024-12-13T14:33:11.768363562Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.771238 env[1719]: time="2024-12-13T14:33:11.771195326Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.791946 env[1719]: time="2024-12-13T14:33:11.791903804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:33:11.811810 env[1719]: time="2024-12-13T14:33:11.811592657Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:11.811810 env[1719]: time="2024-12-13T14:33:11.811641464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:11.811810 env[1719]: time="2024-12-13T14:33:11.811654943Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:11.813739 env[1719]: time="2024-12-13T14:33:11.813612138Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dcb8b6c01d612df67a00c78da856eaa2ed48f121d926fce8482d8cc84562b18a pid=2229 runtime=io.containerd.runc.v2 Dec 13 14:33:11.838043 env[1719]: time="2024-12-13T14:33:11.836225793Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:11.838043 env[1719]: time="2024-12-13T14:33:11.836633519Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:11.838043 env[1719]: time="2024-12-13T14:33:11.836672573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:11.838043 env[1719]: time="2024-12-13T14:33:11.836914566Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158 pid=2244 runtime=io.containerd.runc.v2 Dec 13 14:33:11.855063 env[1719]: time="2024-12-13T14:33:11.854805282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:11.855249 env[1719]: time="2024-12-13T14:33:11.855096078Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:11.855249 env[1719]: time="2024-12-13T14:33:11.855137764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:11.855457 env[1719]: time="2024-12-13T14:33:11.855418132Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c pid=2259 runtime=io.containerd.runc.v2 Dec 13 14:33:11.867398 systemd[1]: Started cri-containerd-dcb8b6c01d612df67a00c78da856eaa2ed48f121d926fce8482d8cc84562b18a.scope. Dec 13 14:33:11.872063 systemd[1]: Started cri-containerd-0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158.scope. Dec 13 14:33:11.925334 systemd[1]: Started cri-containerd-79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c.scope. Dec 13 14:33:12.013080 kubelet[2192]: W1213 14:33:12.012926 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.27.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused Dec 13 14:33:12.013080 kubelet[2192]: E1213 14:33:12.013028 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.27.196:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:33:12.015139 env[1719]: time="2024-12-13T14:33:12.015087786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-27-196,Uid:a3558864ffd71ee55a0ecfc680d667f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcb8b6c01d612df67a00c78da856eaa2ed48f121d926fce8482d8cc84562b18a\"" Dec 13 14:33:12.029937 env[1719]: time="2024-12-13T14:33:12.029893377Z" level=info msg="CreateContainer within sandbox \"dcb8b6c01d612df67a00c78da856eaa2ed48f121d926fce8482d8cc84562b18a\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:33:12.050844 env[1719]: time="2024-12-13T14:33:12.050790519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-27-196,Uid:b6fe5b9169d0647dc5e5525b1a577a32,Namespace:kube-system,Attempt:0,} returns sandbox id \"0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158\"" Dec 13 14:33:12.061011 env[1719]: time="2024-12-13T14:33:12.060108849Z" level=info msg="CreateContainer within sandbox \"0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:33:12.064058 env[1719]: time="2024-12-13T14:33:12.063996369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-27-196,Uid:737cab1ed0fb1c664e76023f05fe37c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c\"" Dec 13 14:33:12.067160 env[1719]: time="2024-12-13T14:33:12.067118401Z" level=info msg="CreateContainer within sandbox \"79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:33:12.070001 env[1719]: time="2024-12-13T14:33:12.069951259Z" level=info msg="CreateContainer within sandbox \"dcb8b6c01d612df67a00c78da856eaa2ed48f121d926fce8482d8cc84562b18a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ff6d6d5916b33465556136def8813f0af0241ecd03009f45da03b6cd6a8dd3b5\"" Dec 13 14:33:12.071222 env[1719]: time="2024-12-13T14:33:12.071198206Z" level=info msg="StartContainer for \"ff6d6d5916b33465556136def8813f0af0241ecd03009f45da03b6cd6a8dd3b5\"" Dec 13 14:33:12.087464 env[1719]: time="2024-12-13T14:33:12.087417467Z" level=info msg="CreateContainer within sandbox \"0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id 
\"40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380\"" Dec 13 14:33:12.088712 env[1719]: time="2024-12-13T14:33:12.088614619Z" level=info msg="StartContainer for \"40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380\"" Dec 13 14:33:12.093243 env[1719]: time="2024-12-13T14:33:12.093193294Z" level=info msg="CreateContainer within sandbox \"79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760\"" Dec 13 14:33:12.093999 env[1719]: time="2024-12-13T14:33:12.093965540Z" level=info msg="StartContainer for \"bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760\"" Dec 13 14:33:12.108441 systemd[1]: Started cri-containerd-ff6d6d5916b33465556136def8813f0af0241ecd03009f45da03b6cd6a8dd3b5.scope. Dec 13 14:33:12.132121 systemd[1]: Started cri-containerd-bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760.scope. Dec 13 14:33:12.149618 systemd[1]: Started cri-containerd-40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380.scope. 
Dec 13 14:33:12.179550 kubelet[2192]: E1213 14:33:12.179462 2192 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.27.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-196?timeout=10s\": dial tcp 172.31.27.196:6443: connect: connection refused" interval="1.6s" Dec 13 14:33:12.241110 env[1719]: time="2024-12-13T14:33:12.241030849Z" level=info msg="StartContainer for \"ff6d6d5916b33465556136def8813f0af0241ecd03009f45da03b6cd6a8dd3b5\" returns successfully" Dec 13 14:33:12.267830 env[1719]: time="2024-12-13T14:33:12.267779965Z" level=info msg="StartContainer for \"bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760\" returns successfully" Dec 13 14:33:12.291917 env[1719]: time="2024-12-13T14:33:12.291866656Z" level=info msg="StartContainer for \"40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380\" returns successfully" Dec 13 14:33:12.337284 kubelet[2192]: W1213 14:33:12.337146 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.27.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused Dec 13 14:33:12.337284 kubelet[2192]: E1213 14:33:12.337227 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.27.196:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:33:12.367355 kubelet[2192]: I1213 14:33:12.366951 2192 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196" Dec 13 14:33:12.367355 kubelet[2192]: E1213 14:33:12.367306 2192 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.31.27.196:6443/api/v1/nodes\": dial tcp 172.31.27.196:6443: connect: 
connection refused" node="ip-172-31-27-196" Dec 13 14:33:12.386573 kubelet[2192]: W1213 14:33:12.386461 2192 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.27.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.27.196:6443: connect: connection refused Dec 13 14:33:12.386573 kubelet[2192]: E1213 14:33:12.386543 2192 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.27.196:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:33:12.625553 amazon-ssm-agent[1691]: 2024-12-13 14:33:12 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Dec 13 14:33:12.767671 kubelet[2192]: E1213 14:33:12.767627 2192 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.27.196:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.27.196:6443: connect: connection refused" logger="UnhandledError" Dec 13 14:33:13.969317 kubelet[2192]: I1213 14:33:13.969289 2192 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196" Dec 13 14:33:15.419792 kubelet[2192]: E1213 14:33:15.419753 2192 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-27-196\" not found" node="ip-172-31-27-196" Dec 13 14:33:15.557261 kubelet[2192]: I1213 14:33:15.557229 2192 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-196" Dec 13 14:33:15.738872 kubelet[2192]: I1213 14:33:15.738734 2192 apiserver.go:52] "Watching 
apiserver" Dec 13 14:33:15.774368 kubelet[2192]: I1213 14:33:15.774340 2192 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Dec 13 14:33:17.669083 systemd[1]: Reloading. Dec 13 14:33:17.789314 /usr/lib/systemd/system-generators/torcx-generator[2481]: time="2024-12-13T14:33:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:33:17.789351 /usr/lib/systemd/system-generators/torcx-generator[2481]: time="2024-12-13T14:33:17Z" level=info msg="torcx already run" Dec 13 14:33:17.958881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:33:17.958905 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:33:17.987158 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:33:18.134069 systemd[1]: Stopping kubelet.service... Dec 13 14:33:18.154039 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:33:18.154300 systemd[1]: Stopped kubelet.service. Dec 13 14:33:18.157197 systemd[1]: Starting kubelet.service... Dec 13 14:33:19.638490 systemd[1]: Started kubelet.service. Dec 13 14:33:19.803519 kubelet[2538]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:33:19.804128 kubelet[2538]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:33:19.804296 kubelet[2538]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:33:19.806685 kubelet[2538]: I1213 14:33:19.806257 2538 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:33:19.833893 kubelet[2538]: I1213 14:33:19.833820 2538 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Dec 13 14:33:19.834378 kubelet[2538]: I1213 14:33:19.834360 2538 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:33:19.834932 kubelet[2538]: I1213 14:33:19.834847 2538 server.go:929] "Client rotation is on, will bootstrap in background" Dec 13 14:33:19.838848 kubelet[2538]: I1213 14:33:19.838824 2538 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Dec 13 14:33:19.841681 sudo[2550]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 14:33:19.842234 sudo[2550]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 14:33:19.847751 kubelet[2538]: I1213 14:33:19.847665 2538 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:33:19.859594 kubelet[2538]: E1213 14:33:19.859543 2538 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Dec 13 14:33:19.859594 kubelet[2538]: I1213 14:33:19.859590 2538 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Dec 13 14:33:19.866741 kubelet[2538]: I1213 14:33:19.865790 2538 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:33:19.866741 kubelet[2538]: I1213 14:33:19.865986 2538 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Dec 13 14:33:19.866741 kubelet[2538]: I1213 14:33:19.866227 2538 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:33:19.866741 kubelet[2538]: I1213 14:33:19.866327 2538 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-27-196","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManager
PolicyOptions":null,"CgroupVersion":2} Dec 13 14:33:19.867050 kubelet[2538]: I1213 14:33:19.866618 2538 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:33:19.867050 kubelet[2538]: I1213 14:33:19.866632 2538 container_manager_linux.go:300] "Creating device plugin manager" Dec 13 14:33:19.867050 kubelet[2538]: I1213 14:33:19.866684 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:33:19.877298 kubelet[2538]: I1213 14:33:19.868546 2538 kubelet.go:408] "Attempting to sync node with API server" Dec 13 14:33:19.877298 kubelet[2538]: I1213 14:33:19.868611 2538 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:33:19.877298 kubelet[2538]: I1213 14:33:19.868706 2538 kubelet.go:314] "Adding apiserver pod source" Dec 13 14:33:19.877298 kubelet[2538]: I1213 14:33:19.868782 2538 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:33:19.922565 kubelet[2538]: I1213 14:33:19.913638 2538 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:33:19.922565 kubelet[2538]: I1213 14:33:19.914807 2538 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:33:19.930903 kubelet[2538]: I1213 14:33:19.929210 2538 server.go:1269] "Started kubelet" Dec 13 14:33:19.950116 kubelet[2538]: I1213 14:33:19.950093 2538 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:33:19.958740 kubelet[2538]: I1213 14:33:19.958702 2538 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:33:19.959295 kubelet[2538]: E1213 14:33:19.959253 2538 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:33:19.959440 kubelet[2538]: I1213 14:33:19.959393 2538 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:33:19.959884 kubelet[2538]: I1213 14:33:19.959865 2538 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:33:19.960392 kubelet[2538]: I1213 14:33:19.960372 2538 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Dec 13 14:33:19.960941 kubelet[2538]: I1213 14:33:19.960926 2538 server.go:460] "Adding debug handlers to kubelet server" Dec 13 14:33:19.973250 kubelet[2538]: I1213 14:33:19.973207 2538 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:33:19.974821 kubelet[2538]: I1213 14:33:19.974798 2538 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:33:19.974965 kubelet[2538]: I1213 14:33:19.974954 2538 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:33:19.975058 kubelet[2538]: I1213 14:33:19.975039 2538 kubelet.go:2321] "Starting kubelet main sync loop" Dec 13 14:33:19.975189 kubelet[2538]: E1213 14:33:19.975169 2538 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:33:19.982171 kubelet[2538]: I1213 14:33:19.982132 2538 volume_manager.go:289] "Starting Kubelet Volume Manager" Dec 13 14:33:19.982437 kubelet[2538]: I1213 14:33:19.982399 2538 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Dec 13 14:33:19.982640 kubelet[2538]: I1213 14:33:19.982623 2538 reconciler.go:26] "Reconciler: start to sync state" Dec 13 14:33:19.989811 kubelet[2538]: I1213 14:33:19.989251 2538 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:33:19.996460 kubelet[2538]: I1213 14:33:19.991664 2538 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:33:19.996460 kubelet[2538]: I1213 14:33:19.994374 2538 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:33:20.078455 kubelet[2538]: E1213 14:33:20.075306 2538 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.102412 2538 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.102430 2538 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.102451 2538 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:33:20.108500 kubelet[2538]: 
I1213 14:33:20.102635 2538 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.102648 2538 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.102673 2538 policy_none.go:49] "None policy: Start" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.103516 2538 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.103536 2538 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:33:20.108500 kubelet[2538]: I1213 14:33:20.103810 2538 state_mem.go:75] "Updated machine memory state" Dec 13 14:33:20.113244 kubelet[2538]: I1213 14:33:20.112624 2538 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:33:20.117873 kubelet[2538]: I1213 14:33:20.117428 2538 eviction_manager.go:189] "Eviction manager: starting control loop" Dec 13 14:33:20.117873 kubelet[2538]: I1213 14:33:20.117450 2538 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 14:33:20.123099 kubelet[2538]: I1213 14:33:20.123073 2538 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:33:20.230323 kubelet[2538]: I1213 14:33:20.230223 2538 kubelet_node_status.go:72] "Attempting to register node" node="ip-172-31-27-196" Dec 13 14:33:20.246197 kubelet[2538]: I1213 14:33:20.246155 2538 kubelet_node_status.go:111] "Node was previously registered" node="ip-172-31-27-196" Dec 13 14:33:20.246601 kubelet[2538]: I1213 14:33:20.246342 2538 kubelet_node_status.go:75] "Successfully registered node" node="ip-172-31-27-196" Dec 13 14:33:20.297337 kubelet[2538]: I1213 14:33:20.297251 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196" Dec 13 14:33:20.297523 kubelet[2538]: I1213 14:33:20.297367 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196" Dec 13 14:33:20.297523 kubelet[2538]: I1213 14:33:20.297401 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196" Dec 13 14:33:20.297523 kubelet[2538]: I1213 14:33:20.297455 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196" Dec 13 14:33:20.297523 kubelet[2538]: I1213 14:33:20.297477 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-ca-certs\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196" Dec 13 14:33:20.297723 kubelet[2538]: I1213 14:33:20.297534 2538 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-k8s-certs\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196" Dec 13 14:33:20.297723 kubelet[2538]: I1213 14:33:20.297603 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/737cab1ed0fb1c664e76023f05fe37c4-kubeconfig\") pod \"kube-controller-manager-ip-172-31-27-196\" (UID: \"737cab1ed0fb1c664e76023f05fe37c4\") " pod="kube-system/kube-controller-manager-ip-172-31-27-196" Dec 13 14:33:20.297723 kubelet[2538]: I1213 14:33:20.297629 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b6fe5b9169d0647dc5e5525b1a577a32-kubeconfig\") pod \"kube-scheduler-ip-172-31-27-196\" (UID: \"b6fe5b9169d0647dc5e5525b1a577a32\") " pod="kube-system/kube-scheduler-ip-172-31-27-196" Dec 13 14:33:20.297723 kubelet[2538]: I1213 14:33:20.297694 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3558864ffd71ee55a0ecfc680d667f3-ca-certs\") pod \"kube-apiserver-ip-172-31-27-196\" (UID: \"a3558864ffd71ee55a0ecfc680d667f3\") " pod="kube-system/kube-apiserver-ip-172-31-27-196" Dec 13 14:33:20.298173 kubelet[2538]: E1213 14:33:20.298012 2538 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-27-196\" already exists" pod="kube-system/kube-scheduler-ip-172-31-27-196" Dec 13 14:33:20.892291 kubelet[2538]: I1213 14:33:20.892243 2538 apiserver.go:52] "Watching apiserver" Dec 13 14:33:20.982862 kubelet[2538]: I1213 14:33:20.982824 2538 desired_state_of_world_populator.go:154] "Finished populating initial 
desired state of world" Dec 13 14:33:21.094097 kubelet[2538]: I1213 14:33:21.093952 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-27-196" podStartSLOduration=1.093935611 podStartE2EDuration="1.093935611s" podCreationTimestamp="2024-12-13 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:21.092893628 +0000 UTC m=+1.426906554" watchObservedRunningTime="2024-12-13 14:33:21.093935611 +0000 UTC m=+1.427948586" Dec 13 14:33:21.112196 kubelet[2538]: I1213 14:33:21.112136 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-27-196" podStartSLOduration=1.112110962 podStartE2EDuration="1.112110962s" podCreationTimestamp="2024-12-13 14:33:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:21.109768937 +0000 UTC m=+1.443781866" watchObservedRunningTime="2024-12-13 14:33:21.112110962 +0000 UTC m=+1.446123886" Dec 13 14:33:21.137923 kubelet[2538]: I1213 14:33:21.137847 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-27-196" podStartSLOduration=4.137829588 podStartE2EDuration="4.137829588s" podCreationTimestamp="2024-12-13 14:33:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:21.124441681 +0000 UTC m=+1.458454606" watchObservedRunningTime="2024-12-13 14:33:21.137829588 +0000 UTC m=+1.471842513" Dec 13 14:33:21.169081 sudo[2550]: pam_unix(sudo:session): session closed for user root Dec 13 14:33:22.881432 update_engine[1706]: I1213 14:33:22.880335 1706 update_attempter.cc:509] Updating boot flags... 
Dec 13 14:33:23.298006 kubelet[2538]: I1213 14:33:23.297702 2538 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Dec 13 14:33:23.299131 env[1719]: time="2024-12-13T14:33:23.299019430Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Dec 13 14:33:23.300076 kubelet[2538]: I1213 14:33:23.299769 2538 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Dec 13 14:33:23.785024 sudo[1939]: pam_unix(sudo:session): session closed for user root
Dec 13 14:33:23.809251 sshd[1936]: pam_unix(sshd:session): session closed for user core
Dec 13 14:33:23.812870 systemd-logind[1705]: Session 5 logged out. Waiting for processes to exit.
Dec 13 14:33:23.814274 systemd[1]: sshd@4-172.31.27.196:22-139.178.89.65:60452.service: Deactivated successfully.
Dec 13 14:33:23.815062 systemd[1]: session-5.scope: Deactivated successfully.
Dec 13 14:33:23.815202 systemd[1]: session-5.scope: Consumed 4.820s CPU time.
Dec 13 14:33:23.816513 systemd-logind[1705]: Removed session 5.
Dec 13 14:33:23.978101 systemd[1]: Created slice kubepods-besteffort-pod1d1ba18c_9399_4dd8_99f4_84832e0e6fe4.slice.
Dec 13 14:33:23.990403 systemd[1]: Created slice kubepods-burstable-pod7a7eaa7e_ea53_4522_94db_4337aa4eb5ca.slice.
Dec 13 14:33:24.044457 kubelet[2538]: I1213 14:33:24.044336 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-bpf-maps\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044457 kubelet[2538]: I1213 14:33:24.044383 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hubble-tls\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044457 kubelet[2538]: I1213 14:33:24.044411 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-run\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044457 kubelet[2538]: I1213 14:33:24.044435 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-xtables-lock\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044457 kubelet[2538]: I1213 14:33:24.044459 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-etc-cni-netd\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044795 kubelet[2538]: I1213 14:33:24.044479 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-clustermesh-secrets\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044795 kubelet[2538]: I1213 14:33:24.044501 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-kernel\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044795 kubelet[2538]: I1213 14:33:24.044523 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1d1ba18c-9399-4dd8-99f4-84832e0e6fe4-xtables-lock\") pod \"kube-proxy-hc6hl\" (UID: \"1d1ba18c-9399-4dd8-99f4-84832e0e6fe4\") " pod="kube-system/kube-proxy-hc6hl"
Dec 13 14:33:24.044795 kubelet[2538]: I1213 14:33:24.044542 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1d1ba18c-9399-4dd8-99f4-84832e0e6fe4-lib-modules\") pod \"kube-proxy-hc6hl\" (UID: \"1d1ba18c-9399-4dd8-99f4-84832e0e6fe4\") " pod="kube-system/kube-proxy-hc6hl"
Dec 13 14:33:24.044795 kubelet[2538]: I1213 14:33:24.044565 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4skfl\" (UniqueName: \"kubernetes.io/projected/1d1ba18c-9399-4dd8-99f4-84832e0e6fe4-kube-api-access-4skfl\") pod \"kube-proxy-hc6hl\" (UID: \"1d1ba18c-9399-4dd8-99f4-84832e0e6fe4\") " pod="kube-system/kube-proxy-hc6hl"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044590 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cni-path\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044609 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-lib-modules\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044633 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-net\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044668 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hostproc\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044695 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1d1ba18c-9399-4dd8-99f4-84832e0e6fe4-kube-proxy\") pod \"kube-proxy-hc6hl\" (UID: \"1d1ba18c-9399-4dd8-99f4-84832e0e6fe4\") " pod="kube-system/kube-proxy-hc6hl"
Dec 13 14:33:24.044999 kubelet[2538]: I1213 14:33:24.044718 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-config-path\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.045244 kubelet[2538]: I1213 14:33:24.044742 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-cgroup\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.045244 kubelet[2538]: I1213 14:33:24.044768 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f97w4\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-kube-api-access-f97w4\") pod \"cilium-2mt46\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") " pod="kube-system/cilium-2mt46"
Dec 13 14:33:24.147263 systemd[1]: Created slice kubepods-besteffort-podc30d8fd2_a95e_4685_9034_0d7dad787d2d.slice.
Dec 13 14:33:24.149345 kubelet[2538]: I1213 14:33:24.149312 2538 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Dec 13 14:33:24.246526 kubelet[2538]: I1213 14:33:24.246492 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30d8fd2-a95e-4685-9034-0d7dad787d2d-cilium-config-path\") pod \"cilium-operator-5d85765b45-vqx2p\" (UID: \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\") " pod="kube-system/cilium-operator-5d85765b45-vqx2p"
Dec 13 14:33:24.246746 kubelet[2538]: I1213 14:33:24.246732 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vwdq\" (UniqueName: \"kubernetes.io/projected/c30d8fd2-a95e-4685-9034-0d7dad787d2d-kube-api-access-5vwdq\") pod \"cilium-operator-5d85765b45-vqx2p\" (UID: \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\") " pod="kube-system/cilium-operator-5d85765b45-vqx2p"
Dec 13 14:33:24.290372 env[1719]: time="2024-12-13T14:33:24.289804824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hc6hl,Uid:1d1ba18c-9399-4dd8-99f4-84832e0e6fe4,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:24.296459 env[1719]: time="2024-12-13T14:33:24.295813605Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mt46,Uid:7a7eaa7e-ea53-4522-94db-4337aa4eb5ca,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:24.326359 env[1719]: time="2024-12-13T14:33:24.326168566Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:33:24.326359 env[1719]: time="2024-12-13T14:33:24.326286930Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:33:24.326359 env[1719]: time="2024-12-13T14:33:24.326324015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:33:24.326848 env[1719]: time="2024-12-13T14:33:24.326554031Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c2bb2665574cd567cf335a5847ebcad08f819e3cc5254ad19b22e92c5ff4f2b pid=2799 runtime=io.containerd.runc.v2
Dec 13 14:33:24.342405 systemd[1]: Started cri-containerd-5c2bb2665574cd567cf335a5847ebcad08f819e3cc5254ad19b22e92c5ff4f2b.scope.
Dec 13 14:33:24.365317 env[1719]: time="2024-12-13T14:33:24.365216588Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:33:24.365574 env[1719]: time="2024-12-13T14:33:24.365535557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:33:24.365729 env[1719]: time="2024-12-13T14:33:24.365702849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:33:24.366101 env[1719]: time="2024-12-13T14:33:24.366065081Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34 pid=2827 runtime=io.containerd.runc.v2
Dec 13 14:33:24.385893 systemd[1]: Started cri-containerd-29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34.scope.
Dec 13 14:33:24.413847 env[1719]: time="2024-12-13T14:33:24.413798836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-hc6hl,Uid:1d1ba18c-9399-4dd8-99f4-84832e0e6fe4,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c2bb2665574cd567cf335a5847ebcad08f819e3cc5254ad19b22e92c5ff4f2b\""
Dec 13 14:33:24.422730 env[1719]: time="2024-12-13T14:33:24.422689311Z" level=info msg="CreateContainer within sandbox \"5c2bb2665574cd567cf335a5847ebcad08f819e3cc5254ad19b22e92c5ff4f2b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 14:33:24.439560 env[1719]: time="2024-12-13T14:33:24.439508641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mt46,Uid:7a7eaa7e-ea53-4522-94db-4337aa4eb5ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\""
Dec 13 14:33:24.443259 env[1719]: time="2024-12-13T14:33:24.443142824Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Dec 13 14:33:24.464633 env[1719]: time="2024-12-13T14:33:24.464580896Z" level=info msg="CreateContainer within sandbox \"5c2bb2665574cd567cf335a5847ebcad08f819e3cc5254ad19b22e92c5ff4f2b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"68086fad823a6c36551439605a44ecff1ad56fabb61157192e375fd42fa0500d\""
Dec 13 14:33:24.466681 env[1719]: time="2024-12-13T14:33:24.466650399Z" level=info msg="StartContainer for \"68086fad823a6c36551439605a44ecff1ad56fabb61157192e375fd42fa0500d\""
Dec 13 14:33:24.469145 env[1719]: time="2024-12-13T14:33:24.469035535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vqx2p,Uid:c30d8fd2-a95e-4685-9034-0d7dad787d2d,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:24.487567 systemd[1]: Started cri-containerd-68086fad823a6c36551439605a44ecff1ad56fabb61157192e375fd42fa0500d.scope.
Dec 13 14:33:24.509527 env[1719]: time="2024-12-13T14:33:24.509440753Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 14:33:24.509789 env[1719]: time="2024-12-13T14:33:24.509496906Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 14:33:24.509916 env[1719]: time="2024-12-13T14:33:24.509774423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 14:33:24.510295 env[1719]: time="2024-12-13T14:33:24.510219503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41 pid=2906 runtime=io.containerd.runc.v2
Dec 13 14:33:24.525714 systemd[1]: Started cri-containerd-879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41.scope.
Dec 13 14:33:24.565854 env[1719]: time="2024-12-13T14:33:24.565741548Z" level=info msg="StartContainer for \"68086fad823a6c36551439605a44ecff1ad56fabb61157192e375fd42fa0500d\" returns successfully"
Dec 13 14:33:24.589114 env[1719]: time="2024-12-13T14:33:24.589065307Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-vqx2p,Uid:c30d8fd2-a95e-4685-9034-0d7dad787d2d,Namespace:kube-system,Attempt:0,} returns sandbox id \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\""
Dec 13 14:33:25.829616 kubelet[2538]: I1213 14:33:25.829553 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-hc6hl" podStartSLOduration=2.829534291 podStartE2EDuration="2.829534291s" podCreationTimestamp="2024-12-13 14:33:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:25.087087217 +0000 UTC m=+5.421100144" watchObservedRunningTime="2024-12-13 14:33:25.829534291 +0000 UTC m=+6.163547217"
Dec 13 14:33:30.864017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2650587259.mount: Deactivated successfully.
Dec 13 14:33:34.038848 env[1719]: time="2024-12-13T14:33:34.038797333Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:34.049565 env[1719]: time="2024-12-13T14:33:34.049516965Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:34.051967 env[1719]: time="2024-12-13T14:33:34.051921944Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:34.052637 env[1719]: time="2024-12-13T14:33:34.052597048Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Dec 13 14:33:34.054869 env[1719]: time="2024-12-13T14:33:34.054828796Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Dec 13 14:33:34.055827 env[1719]: time="2024-12-13T14:33:34.055795268Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Dec 13 14:33:34.083446 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount21016207.mount: Deactivated successfully.
Dec 13 14:33:34.092492 env[1719]: time="2024-12-13T14:33:34.092444743Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\""
Dec 13 14:33:34.094679 env[1719]: time="2024-12-13T14:33:34.094639203Z" level=info msg="StartContainer for \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\""
Dec 13 14:33:34.124565 systemd[1]: Started cri-containerd-7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83.scope.
Dec 13 14:33:34.173085 env[1719]: time="2024-12-13T14:33:34.173027690Z" level=info msg="StartContainer for \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\" returns successfully"
Dec 13 14:33:34.185531 systemd[1]: cri-containerd-7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83.scope: Deactivated successfully.
Dec 13 14:33:34.371298 env[1719]: time="2024-12-13T14:33:34.371234823Z" level=info msg="shim disconnected" id=7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83
Dec 13 14:33:34.371739 env[1719]: time="2024-12-13T14:33:34.371710608Z" level=warning msg="cleaning up after shim disconnected" id=7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83 namespace=k8s.io
Dec 13 14:33:34.371878 env[1719]: time="2024-12-13T14:33:34.371859605Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:34.385498 env[1719]: time="2024-12-13T14:33:34.385442635Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3128 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:35.073079 systemd[1]: run-containerd-runc-k8s.io-7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83-runc.lQxnJL.mount: Deactivated successfully.
Dec 13 14:33:35.073345 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83-rootfs.mount: Deactivated successfully.
Dec 13 14:33:35.110704 env[1719]: time="2024-12-13T14:33:35.110657127Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 14:33:35.147773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4144481387.mount: Deactivated successfully.
Dec 13 14:33:35.159910 env[1719]: time="2024-12-13T14:33:35.159855811Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\""
Dec 13 14:33:35.162697 env[1719]: time="2024-12-13T14:33:35.160733134Z" level=info msg="StartContainer for \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\""
Dec 13 14:33:35.183860 systemd[1]: Started cri-containerd-d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb.scope.
Dec 13 14:33:35.225300 env[1719]: time="2024-12-13T14:33:35.223233179Z" level=info msg="StartContainer for \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\" returns successfully"
Dec 13 14:33:35.239373 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 14:33:35.240702 systemd[1]: Stopped systemd-sysctl.service.
Dec 13 14:33:35.241074 systemd[1]: Stopping systemd-sysctl.service...
Dec 13 14:33:35.246091 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:33:35.250859 systemd[1]: cri-containerd-d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb.scope: Deactivated successfully.
Dec 13 14:33:35.286670 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:33:35.320624 env[1719]: time="2024-12-13T14:33:35.320518833Z" level=info msg="shim disconnected" id=d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb
Dec 13 14:33:35.320624 env[1719]: time="2024-12-13T14:33:35.320616362Z" level=warning msg="cleaning up after shim disconnected" id=d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb namespace=k8s.io
Dec 13 14:33:35.321299 env[1719]: time="2024-12-13T14:33:35.320632113Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:35.334451 env[1719]: time="2024-12-13T14:33:35.333364196Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3190 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:33:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n"
Dec 13 14:33:36.075535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb-rootfs.mount: Deactivated successfully.
Dec 13 14:33:36.115587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2617657771.mount: Deactivated successfully.
Dec 13 14:33:36.149753 env[1719]: time="2024-12-13T14:33:36.144459345Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 14:33:36.206174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4099462683.mount: Deactivated successfully.
Dec 13 14:33:36.216522 env[1719]: time="2024-12-13T14:33:36.216465437Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\""
Dec 13 14:33:36.218754 env[1719]: time="2024-12-13T14:33:36.217637689Z" level=info msg="StartContainer for \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\""
Dec 13 14:33:36.249105 systemd[1]: Started cri-containerd-3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615.scope.
Dec 13 14:33:36.314259 env[1719]: time="2024-12-13T14:33:36.314211030Z" level=info msg="StartContainer for \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\" returns successfully"
Dec 13 14:33:36.328891 systemd[1]: cri-containerd-3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615.scope: Deactivated successfully.
Dec 13 14:33:36.390966 env[1719]: time="2024-12-13T14:33:36.390916618Z" level=info msg="shim disconnected" id=3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615
Dec 13 14:33:36.391252 env[1719]: time="2024-12-13T14:33:36.391234285Z" level=warning msg="cleaning up after shim disconnected" id=3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615 namespace=k8s.io
Dec 13 14:33:36.391339 env[1719]: time="2024-12-13T14:33:36.391327262Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:36.412469 env[1719]: time="2024-12-13T14:33:36.412416796Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3246 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:37.073324 env[1719]: time="2024-12-13T14:33:37.072489459Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:37.075257 env[1719]: time="2024-12-13T14:33:37.075216064Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:37.077123 env[1719]: time="2024-12-13T14:33:37.077090498Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Dec 13 14:33:37.077551 env[1719]: time="2024-12-13T14:33:37.077518235Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Dec 13 14:33:37.081008 env[1719]: time="2024-12-13T14:33:37.080973922Z" level=info msg="CreateContainer within sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Dec 13 14:33:37.096363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3013566308.mount: Deactivated successfully.
Dec 13 14:33:37.106933 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3153563331.mount: Deactivated successfully.
Dec 13 14:33:37.109646 env[1719]: time="2024-12-13T14:33:37.109594430Z" level=info msg="CreateContainer within sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\""
Dec 13 14:33:37.111971 env[1719]: time="2024-12-13T14:33:37.111930143Z" level=info msg="StartContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\""
Dec 13 14:33:37.133636 systemd[1]: Started cri-containerd-299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf.scope.
Dec 13 14:33:37.150895 env[1719]: time="2024-12-13T14:33:37.150831607Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Dec 13 14:33:37.167431 env[1719]: time="2024-12-13T14:33:37.167348426Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\""
Dec 13 14:33:37.170524 env[1719]: time="2024-12-13T14:33:37.170420841Z" level=info msg="StartContainer for \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\""
Dec 13 14:33:37.199241 systemd[1]: Started cri-containerd-14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e.scope.
Dec 13 14:33:37.215052 env[1719]: time="2024-12-13T14:33:37.214998584Z" level=info msg="StartContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" returns successfully"
Dec 13 14:33:37.250720 systemd[1]: cri-containerd-14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e.scope: Deactivated successfully.
Dec 13 14:33:37.252867 env[1719]: time="2024-12-13T14:33:37.252804033Z" level=info msg="StartContainer for \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\" returns successfully"
Dec 13 14:33:37.362631 env[1719]: time="2024-12-13T14:33:37.362578657Z" level=info msg="shim disconnected" id=14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e
Dec 13 14:33:37.362960 env[1719]: time="2024-12-13T14:33:37.362935542Z" level=warning msg="cleaning up after shim disconnected" id=14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e namespace=k8s.io
Dec 13 14:33:37.363141 env[1719]: time="2024-12-13T14:33:37.363049183Z" level=info msg="cleaning up dead shim"
Dec 13 14:33:37.377450 env[1719]: time="2024-12-13T14:33:37.377400643Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:33:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3340 runtime=io.containerd.runc.v2\n"
Dec 13 14:33:38.150038 env[1719]: time="2024-12-13T14:33:38.149990459Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Dec 13 14:33:38.170493 env[1719]: time="2024-12-13T14:33:38.170443186Z" level=info msg="CreateContainer within sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\""
Dec 13 14:33:38.171939 env[1719]: time="2024-12-13T14:33:38.171884109Z" level=info msg="StartContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\""
Dec 13 14:33:38.199902 systemd[1]: Started cri-containerd-aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc.scope.
Dec 13 14:33:38.344630 env[1719]: time="2024-12-13T14:33:38.344591237Z" level=info msg="StartContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" returns successfully"
Dec 13 14:33:38.364983 kubelet[2538]: I1213 14:33:38.364225 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-vqx2p" podStartSLOduration=1.876250872 podStartE2EDuration="14.364200643s" podCreationTimestamp="2024-12-13 14:33:24 +0000 UTC" firstStartedPulling="2024-12-13 14:33:24.590486473 +0000 UTC m=+4.924499385" lastFinishedPulling="2024-12-13 14:33:37.078436242 +0000 UTC m=+17.412449156" observedRunningTime="2024-12-13 14:33:38.24519128 +0000 UTC m=+18.579204217" watchObservedRunningTime="2024-12-13 14:33:38.364200643 +0000 UTC m=+18.698213567"
Dec 13 14:33:38.657347 kubelet[2538]: I1213 14:33:38.657314 2538 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Dec 13 14:33:38.712674 systemd[1]: Created slice kubepods-burstable-podf5f3b9f2_c258_4e61_8448_4a3321c2a694.slice.
Dec 13 14:33:38.721926 systemd[1]: Created slice kubepods-burstable-podb4126957_fc80_4ca3_9602_c56dca4157ca.slice.
Dec 13 14:33:38.791836 kubelet[2538]: I1213 14:33:38.791792 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f5f3b9f2-c258-4e61-8448-4a3321c2a694-config-volume\") pod \"coredns-6f6b679f8f-fbszb\" (UID: \"f5f3b9f2-c258-4e61-8448-4a3321c2a694\") " pod="kube-system/coredns-6f6b679f8f-fbszb"
Dec 13 14:33:38.792067 kubelet[2538]: I1213 14:33:38.792045 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bz5vp\" (UniqueName: \"kubernetes.io/projected/f5f3b9f2-c258-4e61-8448-4a3321c2a694-kube-api-access-bz5vp\") pod \"coredns-6f6b679f8f-fbszb\" (UID: \"f5f3b9f2-c258-4e61-8448-4a3321c2a694\") " pod="kube-system/coredns-6f6b679f8f-fbszb"
Dec 13 14:33:38.792159 kubelet[2538]: I1213 14:33:38.792146 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4126957-fc80-4ca3-9602-c56dca4157ca-config-volume\") pod \"coredns-6f6b679f8f-q8mr5\" (UID: \"b4126957-fc80-4ca3-9602-c56dca4157ca\") " pod="kube-system/coredns-6f6b679f8f-q8mr5"
Dec 13 14:33:38.792254 kubelet[2538]: I1213 14:33:38.792240 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zx97\" (UniqueName: \"kubernetes.io/projected/b4126957-fc80-4ca3-9602-c56dca4157ca-kube-api-access-7zx97\") pod \"coredns-6f6b679f8f-q8mr5\" (UID: \"b4126957-fc80-4ca3-9602-c56dca4157ca\") " pod="kube-system/coredns-6f6b679f8f-q8mr5"
Dec 13 14:33:39.031715 env[1719]: time="2024-12-13T14:33:39.031584043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q8mr5,Uid:b4126957-fc80-4ca3-9602-c56dca4157ca,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:39.034393 env[1719]: time="2024-12-13T14:33:39.034349484Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbszb,Uid:f5f3b9f2-c258-4e61-8448-4a3321c2a694,Namespace:kube-system,Attempt:0,}"
Dec 13 14:33:41.507069 (udev-worker)[3467]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:41.509185 systemd-networkd[1441]: cilium_host: Link UP
Dec 13 14:33:41.509418 systemd-networkd[1441]: cilium_net: Link UP
Dec 13 14:33:41.511955 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Dec 13 14:33:41.513585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Dec 13 14:33:41.511252 systemd-networkd[1441]: cilium_net: Gained carrier
Dec 13 14:33:41.512065 systemd-networkd[1441]: cilium_host: Gained carrier
Dec 13 14:33:41.513373 (udev-worker)[3507]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:41.604706 systemd-networkd[1441]: cilium_host: Gained IPv6LL
Dec 13 14:33:41.992563 systemd-networkd[1441]: cilium_vxlan: Link UP
Dec 13 14:33:41.992577 systemd-networkd[1441]: cilium_vxlan: Gained carrier
Dec 13 14:33:42.389485 systemd-networkd[1441]: cilium_net: Gained IPv6LL
Dec 13 14:33:43.348443 systemd-networkd[1441]: cilium_vxlan: Gained IPv6LL
Dec 13 14:33:44.175295 kernel: NET: Registered PF_ALG protocol family
Dec 13 14:33:45.303258 (udev-worker)[3526]: Network interface NamePolicy= disabled on kernel command line.
Dec 13 14:33:45.314523 systemd-networkd[1441]: lxc_health: Link UP Dec 13 14:33:45.317643 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:33:45.316682 systemd-networkd[1441]: lxc_health: Gained carrier Dec 13 14:33:45.621002 systemd-networkd[1441]: lxca0ce4b826a95: Link UP Dec 13 14:33:45.630424 kernel: eth0: renamed from tmp7e61c Dec 13 14:33:45.639442 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca0ce4b826a95: link becomes ready Dec 13 14:33:45.639203 systemd-networkd[1441]: lxca0ce4b826a95: Gained carrier Dec 13 14:33:45.659227 systemd-networkd[1441]: lxc2cb1e7a7bf48: Link UP Dec 13 14:33:45.670533 kernel: eth0: renamed from tmp52c06 Dec 13 14:33:45.673345 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2cb1e7a7bf48: link becomes ready Dec 13 14:33:45.673430 systemd-networkd[1441]: lxc2cb1e7a7bf48: Gained carrier Dec 13 14:33:45.675409 (udev-worker)[3527]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:33:46.345440 kubelet[2538]: I1213 14:33:46.345360 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2mt46" podStartSLOduration=13.732192402 podStartE2EDuration="23.345335106s" podCreationTimestamp="2024-12-13 14:33:23 +0000 UTC" firstStartedPulling="2024-12-13 14:33:24.440944071 +0000 UTC m=+4.774956974" lastFinishedPulling="2024-12-13 14:33:34.054086685 +0000 UTC m=+14.388099678" observedRunningTime="2024-12-13 14:33:39.170901198 +0000 UTC m=+19.504914140" watchObservedRunningTime="2024-12-13 14:33:46.345335106 +0000 UTC m=+26.679348037" Dec 13 14:33:46.785434 systemd-networkd[1441]: lxc_health: Gained IPv6LL Dec 13 14:33:47.124420 systemd-networkd[1441]: lxc2cb1e7a7bf48: Gained IPv6LL Dec 13 14:33:47.380512 systemd-networkd[1441]: lxca0ce4b826a95: Gained IPv6LL Dec 13 14:33:50.993341 env[1719]: time="2024-12-13T14:33:50.987426747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:50.993341 env[1719]: time="2024-12-13T14:33:50.987485312Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:50.993341 env[1719]: time="2024-12-13T14:33:50.987501335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:50.993341 env[1719]: time="2024-12-13T14:33:50.987686622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656 pid=3886 runtime=io.containerd.runc.v2 Dec 13 14:33:51.006056 env[1719]: time="2024-12-13T14:33:51.005967069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:33:51.009999 env[1719]: time="2024-12-13T14:33:51.006022074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:33:51.009999 env[1719]: time="2024-12-13T14:33:51.008346174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:33:51.013554 env[1719]: time="2024-12-13T14:33:51.013491724Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4 pid=3900 runtime=io.containerd.runc.v2 Dec 13 14:33:51.038294 systemd[1]: run-containerd-runc-k8s.io-52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656-runc.m5BnaY.mount: Deactivated successfully. Dec 13 14:33:51.055146 systemd[1]: Started cri-containerd-52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656.scope. 
Dec 13 14:33:51.069279 systemd[1]: Started cri-containerd-7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4.scope. Dec 13 14:33:51.179078 env[1719]: time="2024-12-13T14:33:51.179031714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-q8mr5,Uid:b4126957-fc80-4ca3-9602-c56dca4157ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656\"" Dec 13 14:33:51.204786 env[1719]: time="2024-12-13T14:33:51.203162340Z" level=info msg="CreateContainer within sandbox \"52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:33:51.250003 env[1719]: time="2024-12-13T14:33:51.246662337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-fbszb,Uid:f5f3b9f2-c258-4e61-8448-4a3321c2a694,Namespace:kube-system,Attempt:0,} returns sandbox id \"7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4\"" Dec 13 14:33:51.257535 env[1719]: time="2024-12-13T14:33:51.257445936Z" level=info msg="CreateContainer within sandbox \"7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:33:51.285410 env[1719]: time="2024-12-13T14:33:51.285343155Z" level=info msg="CreateContainer within sandbox \"52c068937ac88823e363b91e495f7ef449961a350183857901eb9f79c7826656\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"812a19b2edbeaf66577afae4a8a67ad127c9845dedf64d08ea0d6ca5d6924388\"" Dec 13 14:33:51.343950 env[1719]: time="2024-12-13T14:33:51.286393769Z" level=info msg="StartContainer for \"812a19b2edbeaf66577afae4a8a67ad127c9845dedf64d08ea0d6ca5d6924388\"" Dec 13 14:33:51.343950 env[1719]: time="2024-12-13T14:33:51.296587890Z" level=info msg="CreateContainer within sandbox \"7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4\" for 
&ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"50395167ebb4c543f29c17bb78549e65ad64953619ab42be374f94ac9d3e6844\"" Dec 13 14:33:51.343950 env[1719]: time="2024-12-13T14:33:51.298984228Z" level=info msg="StartContainer for \"50395167ebb4c543f29c17bb78549e65ad64953619ab42be374f94ac9d3e6844\"" Dec 13 14:33:51.315140 systemd[1]: Started cri-containerd-812a19b2edbeaf66577afae4a8a67ad127c9845dedf64d08ea0d6ca5d6924388.scope. Dec 13 14:33:51.340550 systemd[1]: Started cri-containerd-50395167ebb4c543f29c17bb78549e65ad64953619ab42be374f94ac9d3e6844.scope. Dec 13 14:33:51.384848 env[1719]: time="2024-12-13T14:33:51.384764520Z" level=info msg="StartContainer for \"812a19b2edbeaf66577afae4a8a67ad127c9845dedf64d08ea0d6ca5d6924388\" returns successfully" Dec 13 14:33:51.391466 env[1719]: time="2024-12-13T14:33:51.391404432Z" level=info msg="StartContainer for \"50395167ebb4c543f29c17bb78549e65ad64953619ab42be374f94ac9d3e6844\" returns successfully" Dec 13 14:33:51.996758 systemd[1]: run-containerd-runc-k8s.io-7e61caf61b17b49e3f599a2a40c467c8a3f9713b200d5857471efd7e75a05ed4-runc.ZCdxbM.mount: Deactivated successfully. 
Dec 13 14:33:52.234902 kubelet[2538]: I1213 14:33:52.234829 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-q8mr5" podStartSLOduration=28.234746793 podStartE2EDuration="28.234746793s" podCreationTimestamp="2024-12-13 14:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:52.234116657 +0000 UTC m=+32.568129587" watchObservedRunningTime="2024-12-13 14:33:52.234746793 +0000 UTC m=+32.568759719" Dec 13 14:33:52.254190 kubelet[2538]: I1213 14:33:52.253984 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-fbszb" podStartSLOduration=28.253967396 podStartE2EDuration="28.253967396s" podCreationTimestamp="2024-12-13 14:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:33:52.251800481 +0000 UTC m=+32.585813419" watchObservedRunningTime="2024-12-13 14:33:52.253967396 +0000 UTC m=+32.587980321" Dec 13 14:34:09.244669 systemd[1]: Started sshd@5-172.31.27.196:22-139.178.89.65:50650.service. Dec 13 14:34:09.492568 sshd[4048]: Accepted publickey for core from 139.178.89.65 port 50650 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:09.501918 sshd[4048]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:09.514353 systemd[1]: Started session-6.scope. Dec 13 14:34:09.514903 systemd-logind[1705]: New session 6 of user core. Dec 13 14:34:09.846568 sshd[4048]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:09.850245 systemd[1]: sshd@5-172.31.27.196:22-139.178.89.65:50650.service: Deactivated successfully. Dec 13 14:34:09.851245 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:34:09.851914 systemd-logind[1705]: Session 6 logged out. Waiting for processes to exit. 
Dec 13 14:34:09.852793 systemd-logind[1705]: Removed session 6. Dec 13 14:34:14.878751 systemd[1]: Started sshd@6-172.31.27.196:22-139.178.89.65:50652.service. Dec 13 14:34:15.070711 sshd[4061]: Accepted publickey for core from 139.178.89.65 port 50652 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:15.072291 sshd[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:15.082158 systemd[1]: Started session-7.scope. Dec 13 14:34:15.083700 systemd-logind[1705]: New session 7 of user core. Dec 13 14:34:15.345852 sshd[4061]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:15.350520 systemd[1]: sshd@6-172.31.27.196:22-139.178.89.65:50652.service: Deactivated successfully. Dec 13 14:34:15.351459 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:34:15.352175 systemd-logind[1705]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:34:15.353353 systemd-logind[1705]: Removed session 7. Dec 13 14:34:20.372835 systemd[1]: Started sshd@7-172.31.27.196:22-139.178.89.65:57926.service. Dec 13 14:34:20.540717 sshd[4078]: Accepted publickey for core from 139.178.89.65 port 57926 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:20.542483 sshd[4078]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:20.549124 systemd[1]: Started session-8.scope. Dec 13 14:34:20.549690 systemd-logind[1705]: New session 8 of user core. Dec 13 14:34:20.760870 sshd[4078]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:20.764641 systemd[1]: sshd@7-172.31.27.196:22-139.178.89.65:57926.service: Deactivated successfully. Dec 13 14:34:20.765604 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 14:34:20.766628 systemd-logind[1705]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:34:20.767724 systemd-logind[1705]: Removed session 8. 
Dec 13 14:34:25.791703 systemd[1]: Started sshd@8-172.31.27.196:22-139.178.89.65:57928.service. Dec 13 14:34:26.016527 sshd[4090]: Accepted publickey for core from 139.178.89.65 port 57928 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:26.018178 sshd[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:26.025118 systemd[1]: Started session-9.scope. Dec 13 14:34:26.025705 systemd-logind[1705]: New session 9 of user core. Dec 13 14:34:26.289766 sshd[4090]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:26.296300 systemd[1]: sshd@8-172.31.27.196:22-139.178.89.65:57928.service: Deactivated successfully. Dec 13 14:34:26.297985 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:34:26.299563 systemd-logind[1705]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:34:26.300985 systemd-logind[1705]: Removed session 9. Dec 13 14:34:31.318315 systemd[1]: Started sshd@9-172.31.27.196:22-139.178.89.65:57884.service. Dec 13 14:34:31.494895 sshd[4104]: Accepted publickey for core from 139.178.89.65 port 57884 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:31.497338 sshd[4104]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:31.505873 systemd[1]: Started session-10.scope. Dec 13 14:34:31.506985 systemd-logind[1705]: New session 10 of user core. Dec 13 14:34:31.732830 sshd[4104]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:31.738538 systemd[1]: sshd@9-172.31.27.196:22-139.178.89.65:57884.service: Deactivated successfully. Dec 13 14:34:31.740031 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:34:31.741071 systemd-logind[1705]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:34:31.742465 systemd-logind[1705]: Removed session 10. Dec 13 14:34:31.763306 systemd[1]: Started sshd@10-172.31.27.196:22-139.178.89.65:57900.service. 
Dec 13 14:34:31.940048 sshd[4116]: Accepted publickey for core from 139.178.89.65 port 57900 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:31.942118 sshd[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:31.948495 systemd-logind[1705]: New session 11 of user core. Dec 13 14:34:31.948954 systemd[1]: Started session-11.scope. Dec 13 14:34:32.294018 systemd[1]: Started sshd@11-172.31.27.196:22-139.178.89.65:57914.service. Dec 13 14:34:32.322776 sshd[4116]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:32.326600 systemd[1]: sshd@10-172.31.27.196:22-139.178.89.65:57900.service: Deactivated successfully. Dec 13 14:34:32.328149 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:34:32.330348 systemd-logind[1705]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:34:32.332945 systemd-logind[1705]: Removed session 11. Dec 13 14:34:32.479547 sshd[4125]: Accepted publickey for core from 139.178.89.65 port 57914 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:32.481745 sshd[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:32.502614 systemd[1]: Started session-12.scope. Dec 13 14:34:32.504635 systemd-logind[1705]: New session 12 of user core. Dec 13 14:34:32.793719 sshd[4125]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:32.798522 systemd-logind[1705]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:34:32.798755 systemd[1]: sshd@11-172.31.27.196:22-139.178.89.65:57914.service: Deactivated successfully. Dec 13 14:34:32.800571 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:34:32.801668 systemd-logind[1705]: Removed session 12. Dec 13 14:34:37.822342 systemd[1]: Started sshd@12-172.31.27.196:22-139.178.89.65:57930.service. 
Dec 13 14:34:37.994085 sshd[4137]: Accepted publickey for core from 139.178.89.65 port 57930 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:37.996841 sshd[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:38.004210 systemd[1]: Started session-13.scope. Dec 13 14:34:38.004925 systemd-logind[1705]: New session 13 of user core. Dec 13 14:34:38.236662 sshd[4137]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:38.242586 systemd[1]: sshd@12-172.31.27.196:22-139.178.89.65:57930.service: Deactivated successfully. Dec 13 14:34:38.243843 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:34:38.244830 systemd-logind[1705]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:34:38.246308 systemd-logind[1705]: Removed session 13. Dec 13 14:34:43.302311 systemd[1]: Started sshd@13-172.31.27.196:22-139.178.89.65:47658.service. Dec 13 14:34:43.485476 sshd[4149]: Accepted publickey for core from 139.178.89.65 port 47658 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:43.487203 sshd[4149]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:43.494349 systemd-logind[1705]: New session 14 of user core. Dec 13 14:34:43.494567 systemd[1]: Started session-14.scope. Dec 13 14:34:43.775444 sshd[4149]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:43.779859 systemd[1]: sshd@13-172.31.27.196:22-139.178.89.65:47658.service: Deactivated successfully. Dec 13 14:34:43.780862 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 14:34:43.781460 systemd-logind[1705]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:34:43.782867 systemd-logind[1705]: Removed session 14. Dec 13 14:34:48.804356 systemd[1]: Started sshd@14-172.31.27.196:22-139.178.89.65:40508.service. 
Dec 13 14:34:48.997426 sshd[4161]: Accepted publickey for core from 139.178.89.65 port 40508 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:48.999370 sshd[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:49.022197 systemd[1]: Started session-15.scope. Dec 13 14:34:49.023734 systemd-logind[1705]: New session 15 of user core. Dec 13 14:34:49.290682 sshd[4161]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:49.293863 systemd[1]: sshd@14-172.31.27.196:22-139.178.89.65:40508.service: Deactivated successfully. Dec 13 14:34:49.294822 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:34:49.295626 systemd-logind[1705]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:34:49.296523 systemd-logind[1705]: Removed session 15. Dec 13 14:34:49.317014 systemd[1]: Started sshd@15-172.31.27.196:22-139.178.89.65:40524.service. Dec 13 14:34:49.482778 sshd[4173]: Accepted publickey for core from 139.178.89.65 port 40524 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:49.484370 sshd[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:49.489865 systemd[1]: Started session-16.scope. Dec 13 14:34:49.490556 systemd-logind[1705]: New session 16 of user core. Dec 13 14:34:54.934560 sshd[4173]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:54.948431 systemd-logind[1705]: Session 16 logged out. Waiting for processes to exit. Dec 13 14:34:54.948649 systemd[1]: sshd@15-172.31.27.196:22-139.178.89.65:40524.service: Deactivated successfully. Dec 13 14:34:54.949616 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:34:54.952398 systemd-logind[1705]: Removed session 16. Dec 13 14:34:54.969086 systemd[1]: Started sshd@16-172.31.27.196:22-139.178.89.65:40536.service. 
Dec 13 14:34:55.215779 sshd[4184]: Accepted publickey for core from 139.178.89.65 port 40536 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:55.216997 sshd[4184]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:55.223488 systemd[1]: Started session-17.scope. Dec 13 14:34:55.224390 systemd-logind[1705]: New session 17 of user core. Dec 13 14:34:57.636222 sshd[4184]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:57.650123 systemd[1]: sshd@16-172.31.27.196:22-139.178.89.65:40536.service: Deactivated successfully. Dec 13 14:34:57.651768 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:34:57.658729 systemd-logind[1705]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:34:57.668312 systemd-logind[1705]: Removed session 17. Dec 13 14:34:57.679737 systemd[1]: Started sshd@17-172.31.27.196:22-139.178.89.65:40548.service. Dec 13 14:34:57.846018 sshd[4209]: Accepted publickey for core from 139.178.89.65 port 40548 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:57.846760 sshd[4209]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:57.852326 systemd[1]: Started session-18.scope. Dec 13 14:34:57.854071 systemd-logind[1705]: New session 18 of user core. Dec 13 14:34:58.296610 sshd[4209]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:58.301160 systemd-logind[1705]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:34:58.302403 systemd[1]: sshd@17-172.31.27.196:22-139.178.89.65:40548.service: Deactivated successfully. Dec 13 14:34:58.304057 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:34:58.305877 systemd-logind[1705]: Removed session 18. Dec 13 14:34:58.325520 systemd[1]: Started sshd@18-172.31.27.196:22-139.178.89.65:58974.service. 
Dec 13 14:34:58.489697 sshd[4219]: Accepted publickey for core from 139.178.89.65 port 58974 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:34:58.493081 sshd[4219]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:34:58.503410 systemd[1]: Started session-19.scope. Dec 13 14:34:58.503410 systemd-logind[1705]: New session 19 of user core. Dec 13 14:34:58.730541 sshd[4219]: pam_unix(sshd:session): session closed for user core Dec 13 14:34:58.735212 systemd[1]: sshd@18-172.31.27.196:22-139.178.89.65:58974.service: Deactivated successfully. Dec 13 14:34:58.737137 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:34:58.738605 systemd-logind[1705]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:34:58.740152 systemd-logind[1705]: Removed session 19. Dec 13 14:35:03.761722 systemd[1]: Started sshd@19-172.31.27.196:22-139.178.89.65:58980.service. Dec 13 14:35:03.942645 sshd[4231]: Accepted publickey for core from 139.178.89.65 port 58980 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:03.944158 sshd[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:03.952440 systemd-logind[1705]: New session 20 of user core. Dec 13 14:35:03.952807 systemd[1]: Started session-20.scope. Dec 13 14:35:04.238140 sshd[4231]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:04.243940 systemd[1]: sshd@19-172.31.27.196:22-139.178.89.65:58980.service: Deactivated successfully. Dec 13 14:35:04.246031 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:35:04.247185 systemd-logind[1705]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:35:04.248727 systemd-logind[1705]: Removed session 20. Dec 13 14:35:09.269750 systemd[1]: Started sshd@20-172.31.27.196:22-139.178.89.65:33714.service. 
Dec 13 14:35:09.447224 sshd[4246]: Accepted publickey for core from 139.178.89.65 port 33714 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:09.450106 sshd[4246]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:09.456159 systemd[1]: Started session-21.scope. Dec 13 14:35:09.457302 systemd-logind[1705]: New session 21 of user core. Dec 13 14:35:09.648623 sshd[4246]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:09.652032 systemd[1]: sshd@20-172.31.27.196:22-139.178.89.65:33714.service: Deactivated successfully. Dec 13 14:35:09.653428 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:35:09.654352 systemd-logind[1705]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:35:09.655599 systemd-logind[1705]: Removed session 21. Dec 13 14:35:14.687504 systemd[1]: Started sshd@21-172.31.27.196:22-139.178.89.65:33716.service. Dec 13 14:35:14.864647 sshd[4258]: Accepted publickey for core from 139.178.89.65 port 33716 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:14.866535 sshd[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:14.872786 systemd[1]: Started session-22.scope. Dec 13 14:35:14.873584 systemd-logind[1705]: New session 22 of user core. Dec 13 14:35:15.084388 sshd[4258]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:15.088877 systemd-logind[1705]: Session 22 logged out. Waiting for processes to exit. Dec 13 14:35:15.089099 systemd[1]: sshd@21-172.31.27.196:22-139.178.89.65:33716.service: Deactivated successfully. Dec 13 14:35:15.090319 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 14:35:15.091311 systemd-logind[1705]: Removed session 22. Dec 13 14:35:20.113307 systemd[1]: Started sshd@22-172.31.27.196:22-139.178.89.65:47300.service. 
Dec 13 14:35:20.280542 sshd[4272]: Accepted publickey for core from 139.178.89.65 port 47300 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:20.282800 sshd[4272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:20.289545 systemd[1]: Started session-23.scope. Dec 13 14:35:20.290835 systemd-logind[1705]: New session 23 of user core. Dec 13 14:35:20.506316 sshd[4272]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:20.511665 systemd[1]: sshd@22-172.31.27.196:22-139.178.89.65:47300.service: Deactivated successfully. Dec 13 14:35:20.512922 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 14:35:20.514457 systemd-logind[1705]: Session 23 logged out. Waiting for processes to exit. Dec 13 14:35:20.516183 systemd-logind[1705]: Removed session 23. Dec 13 14:35:20.540065 systemd[1]: Started sshd@23-172.31.27.196:22-139.178.89.65:47308.service. Dec 13 14:35:20.709899 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 47308 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:20.711610 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:20.724306 systemd[1]: Started session-24.scope. Dec 13 14:35:20.724806 systemd-logind[1705]: New session 24 of user core. Dec 13 14:35:23.969731 env[1719]: time="2024-12-13T14:35:23.965879832Z" level=info msg="StopContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" with timeout 30 (s)" Dec 13 14:35:23.969731 env[1719]: time="2024-12-13T14:35:23.966406169Z" level=info msg="Stop container \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" with signal terminated" Dec 13 14:35:24.020763 systemd[1]: cri-containerd-299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf.scope: Deactivated successfully. 
Dec 13 14:35:24.025848 env[1719]: time="2024-12-13T14:35:24.025788995Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:35:24.036189 env[1719]: time="2024-12-13T14:35:24.036152151Z" level=info msg="StopContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" with timeout 2 (s)" Dec 13 14:35:24.036807 env[1719]: time="2024-12-13T14:35:24.036766382Z" level=info msg="Stop container \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" with signal terminated" Dec 13 14:35:24.050343 systemd-networkd[1441]: lxc_health: Link DOWN Dec 13 14:35:24.050353 systemd-networkd[1441]: lxc_health: Lost carrier Dec 13 14:35:24.099094 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf-rootfs.mount: Deactivated successfully. Dec 13 14:35:24.185769 env[1719]: time="2024-12-13T14:35:24.181980526Z" level=info msg="shim disconnected" id=299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf Dec 13 14:35:24.185769 env[1719]: time="2024-12-13T14:35:24.182048935Z" level=warning msg="cleaning up after shim disconnected" id=299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf namespace=k8s.io Dec 13 14:35:24.185769 env[1719]: time="2024-12-13T14:35:24.182061083Z" level=info msg="cleaning up dead shim" Dec 13 14:35:24.189178 systemd[1]: cri-containerd-aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc.scope: Deactivated successfully. Dec 13 14:35:24.189650 systemd[1]: cri-containerd-aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc.scope: Consumed 8.392s CPU time. 
Dec 13 14:35:24.197917 env[1719]: time="2024-12-13T14:35:24.197865069Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4341 runtime=io.containerd.runc.v2\n" Dec 13 14:35:24.200706 env[1719]: time="2024-12-13T14:35:24.200655327Z" level=info msg="StopContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" returns successfully" Dec 13 14:35:24.201674 env[1719]: time="2024-12-13T14:35:24.201637258Z" level=info msg="StopPodSandbox for \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\"" Dec 13 14:35:24.201792 env[1719]: time="2024-12-13T14:35:24.201733803Z" level=info msg="Container to stop \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:35:24.204773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41-shm.mount: Deactivated successfully. Dec 13 14:35:24.216878 systemd[1]: cri-containerd-879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41.scope: Deactivated successfully. Dec 13 14:35:24.240762 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc-rootfs.mount: Deactivated successfully. Dec 13 14:35:24.267397 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41-rootfs.mount: Deactivated successfully. 
Dec 13 14:35:24.270896 env[1719]: time="2024-12-13T14:35:24.270849794Z" level=info msg="shim disconnected" id=aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc
Dec 13 14:35:24.271094 env[1719]: time="2024-12-13T14:35:24.270901079Z" level=warning msg="cleaning up after shim disconnected" id=aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc namespace=k8s.io
Dec 13 14:35:24.271094 env[1719]: time="2024-12-13T14:35:24.270914472Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:24.276493 env[1719]: time="2024-12-13T14:35:24.276444289Z" level=info msg="shim disconnected" id=879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41
Dec 13 14:35:24.277335 env[1719]: time="2024-12-13T14:35:24.277306911Z" level=warning msg="cleaning up after shim disconnected" id=879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41 namespace=k8s.io
Dec 13 14:35:24.277642 env[1719]: time="2024-12-13T14:35:24.277617558Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:24.288832 env[1719]: time="2024-12-13T14:35:24.288772446Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4389 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:24.293715 env[1719]: time="2024-12-13T14:35:24.293657930Z" level=info msg="StopContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" returns successfully"
Dec 13 14:35:24.294283 env[1719]: time="2024-12-13T14:35:24.294240678Z" level=info msg="StopPodSandbox for \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\""
Dec 13 14:35:24.294394 env[1719]: time="2024-12-13T14:35:24.294314528Z" level=info msg="Container to stop \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.294394 env[1719]: time="2024-12-13T14:35:24.294334932Z" level=info msg="Container to stop \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.294394 env[1719]: time="2024-12-13T14:35:24.294354757Z" level=info msg="Container to stop \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.294394 env[1719]: time="2024-12-13T14:35:24.294371876Z" level=info msg="Container to stop \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.294394 env[1719]: time="2024-12-13T14:35:24.294387600Z" level=info msg="Container to stop \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Dec 13 14:35:24.298473 env[1719]: time="2024-12-13T14:35:24.298434247Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4398 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:24.298960 env[1719]: time="2024-12-13T14:35:24.298923194Z" level=info msg="TearDown network for sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" successfully"
Dec 13 14:35:24.299056 env[1719]: time="2024-12-13T14:35:24.298960045Z" level=info msg="StopPodSandbox for \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" returns successfully"
Dec 13 14:35:24.306690 systemd[1]: cri-containerd-29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34.scope: Deactivated successfully.
Dec 13 14:35:24.354756 env[1719]: time="2024-12-13T14:35:24.354701136Z" level=info msg="shim disconnected" id=29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34
Dec 13 14:35:24.355697 env[1719]: time="2024-12-13T14:35:24.355632666Z" level=warning msg="cleaning up after shim disconnected" id=29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34 namespace=k8s.io
Dec 13 14:35:24.355859 env[1719]: time="2024-12-13T14:35:24.355839119Z" level=info msg="cleaning up dead shim"
Dec 13 14:35:24.369145 env[1719]: time="2024-12-13T14:35:24.369093169Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4434 runtime=io.containerd.runc.v2\n"
Dec 13 14:35:24.369599 env[1719]: time="2024-12-13T14:35:24.369480915Z" level=info msg="TearDown network for sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" successfully"
Dec 13 14:35:24.369599 env[1719]: time="2024-12-13T14:35:24.369591832Z" level=info msg="StopPodSandbox for \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" returns successfully"
Dec 13 14:35:24.423013 kubelet[2538]: I1213 14:35:24.422972 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f97w4\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-kube-api-access-f97w4\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.423969 kubelet[2538]: I1213 14:35:24.423937 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hubble-tls\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.423969 kubelet[2538]: I1213 14:35:24.423977 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hostproc\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424117 kubelet[2538]: I1213 14:35:24.424009 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-clustermesh-secrets\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424117 kubelet[2538]: I1213 14:35:24.424036 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30d8fd2-a95e-4685-9034-0d7dad787d2d-cilium-config-path\") pod \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\" (UID: \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\") "
Dec 13 14:35:24.424117 kubelet[2538]: I1213 14:35:24.424061 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-run\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424117 kubelet[2538]: I1213 14:35:24.424082 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-etc-cni-netd\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424117 kubelet[2538]: I1213 14:35:24.424108 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-config-path\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424135 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vwdq\" (UniqueName: \"kubernetes.io/projected/c30d8fd2-a95e-4685-9034-0d7dad787d2d-kube-api-access-5vwdq\") pod \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\" (UID: \"c30d8fd2-a95e-4685-9034-0d7dad787d2d\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424160 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cni-path\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424183 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-lib-modules\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424208 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-cgroup\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424230 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-bpf-maps\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424626 kubelet[2538]: I1213 14:35:24.424382 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-kernel\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424824 kubelet[2538]: I1213 14:35:24.424415 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-xtables-lock\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.424824 kubelet[2538]: I1213 14:35:24.424437 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-net\") pod \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\" (UID: \"7a7eaa7e-ea53-4522-94db-4337aa4eb5ca\") "
Dec 13 14:35:24.436675 kubelet[2538]: I1213 14:35:24.429241 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.445430 kubelet[2538]: I1213 14:35:24.445378 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c30d8fd2-a95e-4685-9034-0d7dad787d2d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c30d8fd2-a95e-4685-9034-0d7dad787d2d" (UID: "c30d8fd2-a95e-4685-9034-0d7dad787d2d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:35:24.445810 kubelet[2538]: I1213 14:35:24.445783 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.445936 kubelet[2538]: I1213 14:35:24.445922 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.447809 kubelet[2538]: I1213 14:35:24.447777 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Dec 13 14:35:24.448072 kubelet[2538]: I1213 14:35:24.448049 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-kube-api-access-f97w4" (OuterVolumeSpecName: "kube-api-access-f97w4") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "kube-api-access-f97w4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:24.448184 kubelet[2538]: I1213 14:35:24.431490 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.448351 kubelet[2538]: I1213 14:35:24.448333 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:24.448713 kubelet[2538]: I1213 14:35:24.448692 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.448894 kubelet[2538]: I1213 14:35:24.448860 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.449121 kubelet[2538]: I1213 14:35:24.449104 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.449234 kubelet[2538]: I1213 14:35:24.449208 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.449372 kubelet[2538]: I1213 14:35:24.449358 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.449481 kubelet[2538]: I1213 14:35:24.449468 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Dec 13 14:35:24.453948 kubelet[2538]: I1213 14:35:24.453893 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c30d8fd2-a95e-4685-9034-0d7dad787d2d-kube-api-access-5vwdq" (OuterVolumeSpecName: "kube-api-access-5vwdq") pod "c30d8fd2-a95e-4685-9034-0d7dad787d2d" (UID: "c30d8fd2-a95e-4685-9034-0d7dad787d2d"). InnerVolumeSpecName "kube-api-access-5vwdq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Dec 13 14:35:24.455415 kubelet[2538]: I1213 14:35:24.455382 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" (UID: "7a7eaa7e-ea53-4522-94db-4337aa4eb5ca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Dec 13 14:35:24.497209 systemd[1]: Removed slice kubepods-besteffort-podc30d8fd2_a95e_4685_9034_0d7dad787d2d.slice.
Dec 13 14:35:24.514359 kubelet[2538]: I1213 14:35:24.514334 2538 scope.go:117] "RemoveContainer" containerID="aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc"
Dec 13 14:35:24.516228 systemd[1]: Removed slice kubepods-burstable-pod7a7eaa7e_ea53_4522_94db_4337aa4eb5ca.slice.
Dec 13 14:35:24.516385 systemd[1]: kubepods-burstable-pod7a7eaa7e_ea53_4522_94db_4337aa4eb5ca.slice: Consumed 8.526s CPU time.
Dec 13 14:35:24.522710 env[1719]: time="2024-12-13T14:35:24.522663785Z" level=info msg="RemoveContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\""
Dec 13 14:35:24.525645 kubelet[2538]: I1213 14:35:24.525480 2538 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hubble-tls\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525872 2538 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-hostproc\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525891 2538 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f97w4\" (UniqueName: \"kubernetes.io/projected/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-kube-api-access-f97w4\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525904 2538 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-clustermesh-secrets\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525918 2538 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-etc-cni-netd\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525931 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c30d8fd2-a95e-4685-9034-0d7dad787d2d-cilium-config-path\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525942 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-run\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525953 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-config-path\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527346 kubelet[2538]: I1213 14:35:24.525971 2538 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5vwdq\" (UniqueName: \"kubernetes.io/projected/c30d8fd2-a95e-4685-9034-0d7dad787d2d-kube-api-access-5vwdq\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526001 2538 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cni-path\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526014 2538 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-lib-modules\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526027 2538 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-bpf-maps\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526038 2538 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-kernel\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526055 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-cilium-cgroup\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526067 2538 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-host-proc-sys-net\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.527931 kubelet[2538]: I1213 14:35:24.526078 2538 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca-xtables-lock\") on node \"ip-172-31-27-196\" DevicePath \"\""
Dec 13 14:35:24.532452 env[1719]: time="2024-12-13T14:35:24.532403815Z" level=info msg="RemoveContainer for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" returns successfully"
Dec 13 14:35:24.532980 kubelet[2538]: I1213 14:35:24.532941 2538 scope.go:117] "RemoveContainer" containerID="14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e"
Dec 13 14:35:24.535005 env[1719]: time="2024-12-13T14:35:24.534698334Z" level=info msg="RemoveContainer for \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\""
Dec 13 14:35:24.540177 env[1719]: time="2024-12-13T14:35:24.540133169Z" level=info msg="RemoveContainer for \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\" returns successfully"
Dec 13 14:35:24.543085 kubelet[2538]: I1213 14:35:24.543054 2538 scope.go:117] "RemoveContainer" containerID="3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615"
Dec 13 14:35:24.548300 env[1719]: time="2024-12-13T14:35:24.548226202Z" level=info msg="RemoveContainer for \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\""
Dec 13 14:35:24.554229 env[1719]: time="2024-12-13T14:35:24.554182936Z" level=info msg="RemoveContainer for \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\" returns successfully"
Dec 13 14:35:24.554443 kubelet[2538]: I1213 14:35:24.554415 2538 scope.go:117] "RemoveContainer" containerID="d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb"
Dec 13 14:35:24.555852 env[1719]: time="2024-12-13T14:35:24.555812648Z" level=info msg="RemoveContainer for \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\""
Dec 13 14:35:24.563706 env[1719]: time="2024-12-13T14:35:24.563488302Z" level=info msg="RemoveContainer for \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\" returns successfully"
Dec 13 14:35:24.564124 kubelet[2538]: I1213 14:35:24.564100 2538 scope.go:117] "RemoveContainer" containerID="7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83"
Dec 13 14:35:24.567375 env[1719]: time="2024-12-13T14:35:24.567346340Z" level=info msg="RemoveContainer for \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\""
Dec 13 14:35:24.572768 env[1719]: time="2024-12-13T14:35:24.572739645Z" level=info msg="RemoveContainer for \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\" returns successfully"
Dec 13 14:35:24.573042 kubelet[2538]: I1213 14:35:24.573024 2538 scope.go:117] "RemoveContainer" containerID="aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc"
Dec 13 14:35:24.573404 env[1719]: time="2024-12-13T14:35:24.573336746Z" level=error msg="ContainerStatus for \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\": not found"
Dec 13 14:35:24.576743 kubelet[2538]: E1213 14:35:24.576712 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\": not found" containerID="aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc"
Dec 13 14:35:24.576943 kubelet[2538]: I1213 14:35:24.576859 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc"} err="failed to get container status \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\": rpc error: code = NotFound desc = an error occurred when try to find container \"aa0c2096b1c1a47f26830743741cf2aa07384dcf48dd7fa5942b7bf6e8523fbc\": not found"
Dec 13 14:35:24.577015 kubelet[2538]: I1213 14:35:24.577006 2538 scope.go:117] "RemoveContainer" containerID="14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e"
Dec 13 14:35:24.577345 env[1719]: time="2024-12-13T14:35:24.577296303Z" level=error msg="ContainerStatus for \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\": not found"
Dec 13 14:35:24.577644 kubelet[2538]: E1213 14:35:24.577625 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\": not found" containerID="14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e"
Dec 13 14:35:24.577759 kubelet[2538]: I1213 14:35:24.577743 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e"} err="failed to get container status \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\": rpc error: code = NotFound desc = an error occurred when try to find container \"14247450064682040a75e4eae677c46a31e3d95a4846871e2baa8a70c151538e\": not found"
Dec 13 14:35:24.577846 kubelet[2538]: I1213 14:35:24.577835 2538 scope.go:117] "RemoveContainer" containerID="3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615"
Dec 13 14:35:24.578138 env[1719]: time="2024-12-13T14:35:24.578096784Z" level=error msg="ContainerStatus for \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\": not found"
Dec 13 14:35:24.578365 kubelet[2538]: E1213 14:35:24.578334 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\": not found" containerID="3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615"
Dec 13 14:35:24.578443 kubelet[2538]: I1213 14:35:24.578373 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615"} err="failed to get container status \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\": rpc error: code = NotFound desc = an error occurred when try to find container \"3023b7ece4e6713df0087f50762a1490c2e91064a9b027568839da562bf0d615\": not found"
Dec 13 14:35:24.578443 kubelet[2538]: I1213 14:35:24.578393 2538 scope.go:117] "RemoveContainer" containerID="d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb"
Dec 13 14:35:24.578869 env[1719]: time="2024-12-13T14:35:24.578820360Z" level=error msg="ContainerStatus for \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\": not found"
Dec 13 14:35:24.579092 kubelet[2538]: E1213 14:35:24.579069 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\": not found" containerID="d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb"
Dec 13 14:35:24.579173 kubelet[2538]: I1213 14:35:24.579113 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb"} err="failed to get container status \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"d0adbf1fc1fe3cbedaed9dbc2e08130b59608fe36e795bc6fae3df568429fbbb\": not found"
Dec 13 14:35:24.579173 kubelet[2538]: I1213 14:35:24.579147 2538 scope.go:117] "RemoveContainer" containerID="7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83"
Dec 13 14:35:24.579483 env[1719]: time="2024-12-13T14:35:24.579429402Z" level=error msg="ContainerStatus for \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\": not found"
Dec 13 14:35:24.579803 kubelet[2538]: E1213 14:35:24.579785 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\": not found" containerID="7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83"
Dec 13 14:35:24.579922 kubelet[2538]: I1213 14:35:24.579904 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83"} err="failed to get container status \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\": rpc error: code = NotFound desc = an error occurred when try to find container \"7c1caab38698d4df1cb1cde4e6409c6eb54e1ffdc0652229d8dae8d3303f3f83\": not found"
Dec 13 14:35:24.579995 kubelet[2538]: I1213 14:35:24.579987 2538 scope.go:117] "RemoveContainer" containerID="299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf"
Dec 13 14:35:24.581142 env[1719]: time="2024-12-13T14:35:24.581112933Z" level=info msg="RemoveContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\""
Dec 13 14:35:24.586828 env[1719]: time="2024-12-13T14:35:24.586787918Z" level=info msg="RemoveContainer for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" returns successfully"
Dec 13 14:35:24.587071 kubelet[2538]: I1213 14:35:24.587053 2538 scope.go:117] "RemoveContainer" containerID="299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf"
Dec 13 14:35:24.587402 env[1719]: time="2024-12-13T14:35:24.587351256Z" level=error msg="ContainerStatus for \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\": not found"
Dec 13 14:35:24.587741 kubelet[2538]: E1213 14:35:24.587721 2538 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\": not found" containerID="299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf"
Dec 13 14:35:24.587851 kubelet[2538]: I1213 14:35:24.587831 2538 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf"} err="failed to get container status \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"299397f2147fe471c8d01362c05e2a0db1484f86a7779520a91a61f3fd3892cf\": not found"
Dec 13 14:35:24.966181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34-rootfs.mount: Deactivated successfully.
Dec 13 14:35:24.966325 systemd[1]: var-lib-kubelet-pods-c30d8fd2\x2da95e\x2d4685\x2d9034\x2d0d7dad787d2d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vwdq.mount: Deactivated successfully.
Dec 13 14:35:24.966412 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34-shm.mount: Deactivated successfully.
Dec 13 14:35:24.966492 systemd[1]: var-lib-kubelet-pods-7a7eaa7e\x2dea53\x2d4522\x2d94db\x2d4337aa4eb5ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df97w4.mount: Deactivated successfully.
Dec 13 14:35:24.966579 systemd[1]: var-lib-kubelet-pods-7a7eaa7e\x2dea53\x2d4522\x2d94db\x2d4337aa4eb5ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Dec 13 14:35:24.966662 systemd[1]: var-lib-kubelet-pods-7a7eaa7e\x2dea53\x2d4522\x2d94db\x2d4337aa4eb5ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Dec 13 14:35:25.153613 kubelet[2538]: E1213 14:35:25.153567 2538 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Dec 13 14:35:25.892904 sshd[4284]: pam_unix(sshd:session): session closed for user core
Dec 13 14:35:25.905722 systemd[1]: sshd@23-172.31.27.196:22-139.178.89.65:47308.service: Deactivated successfully.
Dec 13 14:35:25.910551 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 14:35:25.911340 systemd-logind[1705]: Session 24 logged out. Waiting for processes to exit.
Dec 13 14:35:25.949420 systemd[1]: Started sshd@24-172.31.27.196:22-139.178.89.65:47316.service.
Dec 13 14:35:25.950839 systemd-logind[1705]: Removed session 24. Dec 13 14:35:25.980248 kubelet[2538]: I1213 14:35:25.980209 2538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" path="/var/lib/kubelet/pods/7a7eaa7e-ea53-4522-94db-4337aa4eb5ca/volumes" Dec 13 14:35:25.981509 kubelet[2538]: I1213 14:35:25.981483 2538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c30d8fd2-a95e-4685-9034-0d7dad787d2d" path="/var/lib/kubelet/pods/c30d8fd2-a95e-4685-9034-0d7dad787d2d/volumes" Dec 13 14:35:26.167468 sshd[4452]: Accepted publickey for core from 139.178.89.65 port 47316 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:26.168696 sshd[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:26.175365 systemd[1]: Started session-25.scope. Dec 13 14:35:26.175370 systemd-logind[1705]: New session 25 of user core. Dec 13 14:35:26.996811 sshd[4452]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:27.000982 systemd-logind[1705]: Session 25 logged out. Waiting for processes to exit. Dec 13 14:35:27.001453 systemd[1]: sshd@24-172.31.27.196:22-139.178.89.65:47316.service: Deactivated successfully. Dec 13 14:35:27.002428 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 14:35:27.004275 systemd-logind[1705]: Removed session 25. 
Dec 13 14:35:27.010764 kubelet[2538]: E1213 14:35:27.010726 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="cilium-agent" Dec 13 14:35:27.011409 kubelet[2538]: E1213 14:35:27.010775 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="mount-cgroup" Dec 13 14:35:27.011409 kubelet[2538]: E1213 14:35:27.010786 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="mount-bpf-fs" Dec 13 14:35:27.011409 kubelet[2538]: E1213 14:35:27.010794 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="c30d8fd2-a95e-4685-9034-0d7dad787d2d" containerName="cilium-operator" Dec 13 14:35:27.011409 kubelet[2538]: E1213 14:35:27.010801 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="clean-cilium-state" Dec 13 14:35:27.011409 kubelet[2538]: E1213 14:35:27.010810 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="apply-sysctl-overwrites" Dec 13 14:35:27.011409 kubelet[2538]: I1213 14:35:27.010858 2538 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7eaa7e-ea53-4522-94db-4337aa4eb5ca" containerName="cilium-agent" Dec 13 14:35:27.011409 kubelet[2538]: I1213 14:35:27.010875 2538 memory_manager.go:354] "RemoveStaleState removing state" podUID="c30d8fd2-a95e-4685-9034-0d7dad787d2d" containerName="cilium-operator" Dec 13 14:35:27.030579 systemd[1]: Started sshd@25-172.31.27.196:22-139.178.89.65:47332.service. 
Dec 13 14:35:27.051009 kubelet[2538]: I1213 14:35:27.050966 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-clustermesh-secrets\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051009 kubelet[2538]: I1213 14:35:27.051012 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hubble-tls\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051043 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-cgroup\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051068 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hostproc\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051088 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-bpf-maps\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051107 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-etc-cni-netd\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051127 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-xtables-lock\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051235 kubelet[2538]: I1213 14:35:27.051148 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-net\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051598 kubelet[2538]: I1213 14:35:27.051168 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dc7gd\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-kube-api-access-dc7gd\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051598 kubelet[2538]: I1213 14:35:27.051191 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-run\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051598 kubelet[2538]: I1213 14:35:27.051216 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cni-path\") pod 
\"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051598 kubelet[2538]: I1213 14:35:27.051241 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-config-path\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051598 kubelet[2538]: I1213 14:35:27.051286 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-ipsec-secrets\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051807 kubelet[2538]: I1213 14:35:27.051311 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-kernel\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.051807 kubelet[2538]: I1213 14:35:27.051335 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-lib-modules\") pod \"cilium-l962m\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " pod="kube-system/cilium-l962m" Dec 13 14:35:27.060517 systemd[1]: Created slice kubepods-burstable-pod8670665d_ce5b_4596_9ea5_5b35a5fac30b.slice. 
Dec 13 14:35:27.216474 sshd[4464]: Accepted publickey for core from 139.178.89.65 port 47332 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:27.218077 sshd[4464]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:27.233373 systemd-logind[1705]: New session 26 of user core. Dec 13 14:35:27.233937 systemd[1]: Started session-26.scope. Dec 13 14:35:27.371848 env[1719]: time="2024-12-13T14:35:27.371729501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l962m,Uid:8670665d-ce5b-4596-9ea5-5b35a5fac30b,Namespace:kube-system,Attempt:0,}" Dec 13 14:35:27.407849 env[1719]: time="2024-12-13T14:35:27.402233655Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:27.407849 env[1719]: time="2024-12-13T14:35:27.402299069Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:27.407849 env[1719]: time="2024-12-13T14:35:27.402316670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:27.407849 env[1719]: time="2024-12-13T14:35:27.402496858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f pid=4483 runtime=io.containerd.runc.v2 Dec 13 14:35:27.421492 systemd[1]: Started cri-containerd-8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f.scope. 
Dec 13 14:35:27.475958 env[1719]: time="2024-12-13T14:35:27.475914156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l962m,Uid:8670665d-ce5b-4596-9ea5-5b35a5fac30b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\"" Dec 13 14:35:27.479535 env[1719]: time="2024-12-13T14:35:27.479469525Z" level=info msg="CreateContainer within sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:35:27.509862 env[1719]: time="2024-12-13T14:35:27.509739404Z" level=info msg="CreateContainer within sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\"" Dec 13 14:35:27.510670 env[1719]: time="2024-12-13T14:35:27.510639958Z" level=info msg="StartContainer for \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\"" Dec 13 14:35:27.540899 systemd[1]: Started cri-containerd-be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e.scope. Dec 13 14:35:27.554531 systemd[1]: cri-containerd-be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e.scope: Deactivated successfully. 
Dec 13 14:35:27.589493 sshd[4464]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:27.591587 env[1719]: time="2024-12-13T14:35:27.591208097Z" level=info msg="shim disconnected" id=be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e Dec 13 14:35:27.591751 env[1719]: time="2024-12-13T14:35:27.591638984Z" level=warning msg="cleaning up after shim disconnected" id=be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e namespace=k8s.io Dec 13 14:35:27.591751 env[1719]: time="2024-12-13T14:35:27.591658601Z" level=info msg="cleaning up dead shim" Dec 13 14:35:27.594527 systemd[1]: sshd@25-172.31.27.196:22-139.178.89.65:47332.service: Deactivated successfully. Dec 13 14:35:27.595541 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 14:35:27.600180 systemd-logind[1705]: Session 26 logged out. Waiting for processes to exit. Dec 13 14:35:27.602870 systemd-logind[1705]: Removed session 26. Dec 13 14:35:27.616403 systemd[1]: Started sshd@26-172.31.27.196:22-139.178.89.65:47346.service. 
Dec 13 14:35:27.635082 env[1719]: time="2024-12-13T14:35:27.634952543Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4540 runtime=io.containerd.runc.v2\ntime=\"2024-12-13T14:35:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Dec 13 14:35:27.636095 env[1719]: time="2024-12-13T14:35:27.635966326Z" level=error msg="copy shim log" error="read /proc/self/fd/34: file already closed" Dec 13 14:35:27.636361 env[1719]: time="2024-12-13T14:35:27.636317186Z" level=error msg="Failed to pipe stdout of container \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\"" error="reading from a closed fifo" Dec 13 14:35:27.636469 env[1719]: time="2024-12-13T14:35:27.636327137Z" level=error msg="Failed to pipe stderr of container \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\"" error="reading from a closed fifo" Dec 13 14:35:27.639743 env[1719]: time="2024-12-13T14:35:27.639671576Z" level=error msg="StartContainer for \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Dec 13 14:35:27.641538 kubelet[2538]: E1213 14:35:27.641389 2538 log.go:32] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" 
containerID="be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e" Dec 13 14:35:27.644887 kubelet[2538]: E1213 14:35:27.644812 2538 kuberuntime_manager.go:1272] "Unhandled Error" err=< Dec 13 14:35:27.644887 kubelet[2538]: init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Dec 13 14:35:27.644887 kubelet[2538]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Dec 13 14:35:27.644887 kubelet[2538]: rm /hostbin/cilium-mount Dec 13 14:35:27.645114 kubelet[2538]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dc7gd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-l962m_kube-system(8670665d-ce5b-4596-9ea5-5b35a5fac30b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Dec 13 14:35:27.645114 kubelet[2538]: > logger="UnhandledError" Dec 13 14:35:27.646847 kubelet[2538]: E1213 14:35:27.646793 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-l962m" podUID="8670665d-ce5b-4596-9ea5-5b35a5fac30b" Dec 13 14:35:27.786689 sshd[4549]: Accepted publickey for core from 139.178.89.65 port 47346 ssh2: RSA SHA256:kjZzhLCfrUb6HP3VZI7nfxYjuxqu9bKyQNrCGPkPDkk Dec 13 14:35:27.788360 sshd[4549]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:35:27.793349 systemd-logind[1705]: New session 27 of user core. Dec 13 14:35:27.793758 systemd[1]: Started session-27.scope. 
Dec 13 14:35:27.976127 kubelet[2538]: E1213 14:35:27.976010 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-q8mr5" podUID="b4126957-fc80-4ca3-9602-c56dca4157ca" Dec 13 14:35:28.520399 env[1719]: time="2024-12-13T14:35:28.520355272Z" level=info msg="StopPodSandbox for \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\"" Dec 13 14:35:28.520911 env[1719]: time="2024-12-13T14:35:28.520435382Z" level=info msg="Container to stop \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 14:35:28.528284 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f-shm.mount: Deactivated successfully. Dec 13 14:35:28.554907 systemd[1]: cri-containerd-8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f.scope: Deactivated successfully. Dec 13 14:35:28.590567 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f-rootfs.mount: Deactivated successfully. 
Dec 13 14:35:28.609024 env[1719]: time="2024-12-13T14:35:28.608970852Z" level=info msg="shim disconnected" id=8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f Dec 13 14:35:28.609024 env[1719]: time="2024-12-13T14:35:28.609026182Z" level=warning msg="cleaning up after shim disconnected" id=8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f namespace=k8s.io Dec 13 14:35:28.609345 env[1719]: time="2024-12-13T14:35:28.609038337Z" level=info msg="cleaning up dead shim" Dec 13 14:35:28.619218 env[1719]: time="2024-12-13T14:35:28.619170352Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4580 runtime=io.containerd.runc.v2\n" Dec 13 14:35:28.619676 env[1719]: time="2024-12-13T14:35:28.619602223Z" level=info msg="TearDown network for sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" successfully" Dec 13 14:35:28.619777 env[1719]: time="2024-12-13T14:35:28.619674043Z" level=info msg="StopPodSandbox for \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" returns successfully" Dec 13 14:35:28.667809 kubelet[2538]: I1213 14:35:28.667757 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-bpf-maps\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.667809 kubelet[2538]: I1213 14:35:28.667813 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dc7gd\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-kube-api-access-dc7gd\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.667897 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-clustermesh-secrets\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.667921 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-etc-cni-netd\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.667943 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-net\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.667969 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-config-path\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.667991 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-cgroup\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668016 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-xtables-lock\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 
kubelet[2538]: I1213 14:35:28.668038 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cni-path\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668061 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-lib-modules\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668080 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-run\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668105 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-ipsec-secrets\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668130 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hubble-tls\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668155 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hostproc\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" 
(UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668177 2538 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-kernel\") pod \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\" (UID: \"8670665d-ce5b-4596-9ea5-5b35a5fac30b\") " Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668256 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.668500 kubelet[2538]: I1213 14:35:28.668318 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.670100 kubelet[2538]: I1213 14:35:28.670065 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.675692 systemd[1]: var-lib-kubelet-pods-8670665d\x2dce5b\x2d4596\x2d9ea5\x2d5b35a5fac30b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddc7gd.mount: Deactivated successfully. 
Dec 13 14:35:28.677182 kubelet[2538]: I1213 14:35:28.677140 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cni-path" (OuterVolumeSpecName: "cni-path") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.677298 kubelet[2538]: I1213 14:35:28.677207 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.677298 kubelet[2538]: I1213 14:35:28.677229 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.681717 systemd[1]: var-lib-kubelet-pods-8670665d\x2dce5b\x2d4596\x2d9ea5\x2d5b35a5fac30b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 14:35:28.684083 kubelet[2538]: I1213 14:35:28.684035 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:35:28.684383 kubelet[2538]: I1213 14:35:28.684361 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.684513 kubelet[2538]: I1213 14:35:28.684498 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.686527 kubelet[2538]: I1213 14:35:28.686495 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 14:35:28.687858 kubelet[2538]: I1213 14:35:28.687831 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 14:35:28.688174 kubelet[2538]: I1213 14:35:28.687873 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.688174 kubelet[2538]: I1213 14:35:28.687944 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-kube-api-access-dc7gd" (OuterVolumeSpecName: "kube-api-access-dc7gd") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "kube-api-access-dc7gd". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:35:28.688174 kubelet[2538]: I1213 14:35:28.687978 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hostproc" (OuterVolumeSpecName: "hostproc") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 14:35:28.691472 kubelet[2538]: I1213 14:35:28.691390 2538 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8670665d-ce5b-4596-9ea5-5b35a5fac30b" (UID: "8670665d-ce5b-4596-9ea5-5b35a5fac30b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 14:35:28.768923 kubelet[2538]: I1213 14:35:28.768880 2538 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-bpf-maps\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.768923 kubelet[2538]: I1213 14:35:28.768919 2538 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dc7gd\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-kube-api-access-dc7gd\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768938 2538 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-etc-cni-netd\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768950 2538 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-net\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768965 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-config-path\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768976 2538 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-clustermesh-secrets\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768987 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-cgroup\") on node 
\"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.768997 2538 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-xtables-lock\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769006 2538 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cni-path\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769015 2538 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-lib-modules\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769028 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-run\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769040 2538 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8670665d-ce5b-4596-9ea5-5b35a5fac30b-cilium-ipsec-secrets\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769051 2538 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hubble-tls\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 kubelet[2538]: I1213 14:35:28.769061 2538 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-host-proc-sys-kernel\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:28.769256 
kubelet[2538]: I1213 14:35:28.769141 2538 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8670665d-ce5b-4596-9ea5-5b35a5fac30b-hostproc\") on node \"ip-172-31-27-196\" DevicePath \"\"" Dec 13 14:35:29.163606 systemd[1]: var-lib-kubelet-pods-8670665d\x2dce5b\x2d4596\x2d9ea5\x2d5b35a5fac30b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 14:35:29.163874 systemd[1]: var-lib-kubelet-pods-8670665d\x2dce5b\x2d4596\x2d9ea5\x2d5b35a5fac30b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Dec 13 14:35:29.523238 kubelet[2538]: I1213 14:35:29.523089 2538 scope.go:117] "RemoveContainer" containerID="be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e" Dec 13 14:35:29.525462 env[1719]: time="2024-12-13T14:35:29.525099493Z" level=info msg="RemoveContainer for \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\"" Dec 13 14:35:29.530533 env[1719]: time="2024-12-13T14:35:29.530393209Z" level=info msg="RemoveContainer for \"be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e\" returns successfully" Dec 13 14:35:29.535473 systemd[1]: Removed slice kubepods-burstable-pod8670665d_ce5b_4596_9ea5_5b35a5fac30b.slice. Dec 13 14:35:29.598137 kubelet[2538]: E1213 14:35:29.598106 2538 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8670665d-ce5b-4596-9ea5-5b35a5fac30b" containerName="mount-cgroup" Dec 13 14:35:29.598622 kubelet[2538]: I1213 14:35:29.598600 2538 memory_manager.go:354] "RemoveStaleState removing state" podUID="8670665d-ce5b-4596-9ea5-5b35a5fac30b" containerName="mount-cgroup" Dec 13 14:35:29.612321 systemd[1]: Created slice kubepods-burstable-podda8a4849_d9bd_4010_8a6c_433078d247c8.slice. 
Dec 13 14:35:29.680033 kubelet[2538]: I1213 14:35:29.679807 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-bpf-maps\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680100 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-hostproc\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680183 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/da8a4849-d9bd-4010-8a6c-433078d247c8-clustermesh-secrets\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680247 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-host-proc-sys-kernel\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680316 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98z6\" (UniqueName: \"kubernetes.io/projected/da8a4849-d9bd-4010-8a6c-433078d247c8-kube-api-access-l98z6\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680345 2538 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-etc-cni-netd\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680400 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-cni-path\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680455 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da8a4849-d9bd-4010-8a6c-433078d247c8-cilium-config-path\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680480 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/da8a4849-d9bd-4010-8a6c-433078d247c8-cilium-ipsec-secrets\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.680535 kubelet[2538]: I1213 14:35:29.680529 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-cilium-run\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.681226 kubelet[2538]: I1213 14:35:29.680552 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-lib-modules\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.681226 kubelet[2538]: I1213 14:35:29.680606 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-host-proc-sys-net\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.681226 kubelet[2538]: I1213 14:35:29.680632 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-xtables-lock\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.681226 kubelet[2538]: I1213 14:35:29.680703 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/da8a4849-d9bd-4010-8a6c-433078d247c8-cilium-cgroup\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.681226 kubelet[2538]: I1213 14:35:29.680727 2538 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/da8a4849-d9bd-4010-8a6c-433078d247c8-hubble-tls\") pod \"cilium-bxl8l\" (UID: \"da8a4849-d9bd-4010-8a6c-433078d247c8\") " pod="kube-system/cilium-bxl8l" Dec 13 14:35:29.918760 env[1719]: time="2024-12-13T14:35:29.918685046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxl8l,Uid:da8a4849-d9bd-4010-8a6c-433078d247c8,Namespace:kube-system,Attempt:0,}" Dec 13 14:35:29.949571 env[1719]: time="2024-12-13T14:35:29.949491177Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:35:29.949892 env[1719]: time="2024-12-13T14:35:29.949837320Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:35:29.949892 env[1719]: time="2024-12-13T14:35:29.949862241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:35:29.950316 env[1719]: time="2024-12-13T14:35:29.950235705Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177 pid=4607 runtime=io.containerd.runc.v2 Dec 13 14:35:29.968576 systemd[1]: Started cri-containerd-3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177.scope. Dec 13 14:35:29.978509 kubelet[2538]: E1213 14:35:29.978468 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-q8mr5" podUID="b4126957-fc80-4ca3-9602-c56dca4157ca" Dec 13 14:35:29.982897 kubelet[2538]: I1213 14:35:29.982866 2538 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8670665d-ce5b-4596-9ea5-5b35a5fac30b" path="/var/lib/kubelet/pods/8670665d-ce5b-4596-9ea5-5b35a5fac30b/volumes" Dec 13 14:35:30.007870 env[1719]: time="2024-12-13T14:35:30.007825765Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bxl8l,Uid:da8a4849-d9bd-4010-8a6c-433078d247c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\"" Dec 13 14:35:30.011333 env[1719]: time="2024-12-13T14:35:30.011246526Z" level=info msg="CreateContainer within sandbox 
\"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 14:35:30.034218 env[1719]: time="2024-12-13T14:35:30.034177698Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415\"" Dec 13 14:35:30.036702 env[1719]: time="2024-12-13T14:35:30.036658082Z" level=info msg="StartContainer for \"cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415\"" Dec 13 14:35:30.069617 systemd[1]: Started cri-containerd-cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415.scope. Dec 13 14:35:30.122897 env[1719]: time="2024-12-13T14:35:30.122838096Z" level=info msg="StartContainer for \"cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415\" returns successfully" Dec 13 14:35:30.153033 systemd[1]: cri-containerd-cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415.scope: Deactivated successfully. Dec 13 14:35:30.155069 kubelet[2538]: E1213 14:35:30.155026 2538 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 14:35:30.186500 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415-rootfs.mount: Deactivated successfully. 
Dec 13 14:35:30.214626 env[1719]: time="2024-12-13T14:35:30.214572268Z" level=info msg="shim disconnected" id=cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415 Dec 13 14:35:30.214626 env[1719]: time="2024-12-13T14:35:30.214618433Z" level=warning msg="cleaning up after shim disconnected" id=cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415 namespace=k8s.io Dec 13 14:35:30.214626 env[1719]: time="2024-12-13T14:35:30.214631058Z" level=info msg="cleaning up dead shim" Dec 13 14:35:30.224703 env[1719]: time="2024-12-13T14:35:30.224595818Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4692 runtime=io.containerd.runc.v2\n" Dec 13 14:35:30.531205 env[1719]: time="2024-12-13T14:35:30.531082426Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 14:35:30.556223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2095626380.mount: Deactivated successfully. Dec 13 14:35:30.565152 env[1719]: time="2024-12-13T14:35:30.561524079Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f\"" Dec 13 14:35:30.569301 env[1719]: time="2024-12-13T14:35:30.566813173Z" level=info msg="StartContainer for \"87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f\"" Dec 13 14:35:30.592342 systemd[1]: Started cri-containerd-87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f.scope. 
Dec 13 14:35:30.628707 env[1719]: time="2024-12-13T14:35:30.628666158Z" level=info msg="StartContainer for \"87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f\" returns successfully" Dec 13 14:35:30.642546 systemd[1]: cri-containerd-87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f.scope: Deactivated successfully. Dec 13 14:35:30.680345 env[1719]: time="2024-12-13T14:35:30.680300729Z" level=info msg="shim disconnected" id=87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f Dec 13 14:35:30.680658 env[1719]: time="2024-12-13T14:35:30.680625414Z" level=warning msg="cleaning up after shim disconnected" id=87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f namespace=k8s.io Dec 13 14:35:30.680658 env[1719]: time="2024-12-13T14:35:30.680645926Z" level=info msg="cleaning up dead shim" Dec 13 14:35:30.690688 env[1719]: time="2024-12-13T14:35:30.690467051Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4758 runtime=io.containerd.runc.v2\n" Dec 13 14:35:30.721247 kubelet[2538]: W1213 14:35:30.720258 2538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8670665d_ce5b_4596_9ea5_5b35a5fac30b.slice/cri-containerd-be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e.scope WatchSource:0}: container "be6fd7043e2cfccd84e62e09bd8454a2ab4a9acd51093fe2724f0f841657c40e" in namespace "k8s.io": not found Dec 13 14:35:31.542691 env[1719]: time="2024-12-13T14:35:31.542098547Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 14:35:31.583022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount562752058.mount: Deactivated successfully. 
Dec 13 14:35:31.612105 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776517893.mount: Deactivated successfully. Dec 13 14:35:31.618847 env[1719]: time="2024-12-13T14:35:31.618749066Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e\"" Dec 13 14:35:31.620594 env[1719]: time="2024-12-13T14:35:31.620555931Z" level=info msg="StartContainer for \"8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e\"" Dec 13 14:35:31.651239 systemd[1]: Started cri-containerd-8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e.scope. Dec 13 14:35:31.714517 env[1719]: time="2024-12-13T14:35:31.714460670Z" level=info msg="StartContainer for \"8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e\" returns successfully" Dec 13 14:35:31.800709 systemd[1]: cri-containerd-8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e.scope: Deactivated successfully. 
Dec 13 14:35:31.843410 env[1719]: time="2024-12-13T14:35:31.843247101Z" level=info msg="shim disconnected" id=8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e Dec 13 14:35:31.843410 env[1719]: time="2024-12-13T14:35:31.843411258Z" level=warning msg="cleaning up after shim disconnected" id=8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e namespace=k8s.io Dec 13 14:35:31.843785 env[1719]: time="2024-12-13T14:35:31.843424323Z" level=info msg="cleaning up dead shim" Dec 13 14:35:31.852566 env[1719]: time="2024-12-13T14:35:31.852508963Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4818 runtime=io.containerd.runc.v2\n" Dec 13 14:35:31.976188 kubelet[2538]: E1213 14:35:31.975773 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-q8mr5" podUID="b4126957-fc80-4ca3-9602-c56dca4157ca" Dec 13 14:35:32.142769 kubelet[2538]: I1213 14:35:32.142711 2538 setters.go:600] "Node became not ready" node="ip-172-31-27-196" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T14:35:32Z","lastTransitionTime":"2024-12-13T14:35:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Dec 13 14:35:32.597439 env[1719]: time="2024-12-13T14:35:32.596336531Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 14:35:32.638458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount945974412.mount: Deactivated successfully. 
Dec 13 14:35:32.655522 env[1719]: time="2024-12-13T14:35:32.655466196Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b\"" Dec 13 14:35:32.656508 env[1719]: time="2024-12-13T14:35:32.656470207Z" level=info msg="StartContainer for \"5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b\"" Dec 13 14:35:32.687643 systemd[1]: Started cri-containerd-5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b.scope. Dec 13 14:35:32.730966 systemd[1]: cri-containerd-5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b.scope: Deactivated successfully. Dec 13 14:35:32.734693 env[1719]: time="2024-12-13T14:35:32.734639534Z" level=info msg="StartContainer for \"5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b\" returns successfully" Dec 13 14:35:32.767986 env[1719]: time="2024-12-13T14:35:32.767932239Z" level=info msg="shim disconnected" id=5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b Dec 13 14:35:32.767986 env[1719]: time="2024-12-13T14:35:32.767986400Z" level=warning msg="cleaning up after shim disconnected" id=5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b namespace=k8s.io Dec 13 14:35:32.767986 env[1719]: time="2024-12-13T14:35:32.768068526Z" level=info msg="cleaning up dead shim" Dec 13 14:35:32.778617 env[1719]: time="2024-12-13T14:35:32.778561485Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:35:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4877 runtime=io.containerd.runc.v2\n" Dec 13 14:35:33.163693 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b-rootfs.mount: Deactivated successfully. 
Dec 13 14:35:33.577091 env[1719]: time="2024-12-13T14:35:33.576749892Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 14:35:33.605158 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1933767121.mount: Deactivated successfully. Dec 13 14:35:33.633868 env[1719]: time="2024-12-13T14:35:33.630489434Z" level=info msg="CreateContainer within sandbox \"3e952233bb612845830c3c5c0c458473d5dedf647fcfe2c60487c129de6df177\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4\"" Dec 13 14:35:33.640012 env[1719]: time="2024-12-13T14:35:33.639972606Z" level=info msg="StartContainer for \"7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4\"" Dec 13 14:35:33.685908 systemd[1]: Started cri-containerd-7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4.scope. 
Dec 13 14:35:33.728501 env[1719]: time="2024-12-13T14:35:33.727454007Z" level=info msg="StartContainer for \"7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4\" returns successfully" Dec 13 14:35:33.847188 kubelet[2538]: W1213 14:35:33.847071 2538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8a4849_d9bd_4010_8a6c_433078d247c8.slice/cri-containerd-cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415.scope WatchSource:0}: task cedfa47928e618b79d01f6f9fd433990842261136fc9db73ad3837b598046415 not found: not found Dec 13 14:35:33.978435 kubelet[2538]: E1213 14:35:33.976521 2538 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-6f6b679f8f-q8mr5" podUID="b4126957-fc80-4ca3-9602-c56dca4157ca" Dec 13 14:35:34.628513 kubelet[2538]: I1213 14:35:34.628440 2538 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bxl8l" podStartSLOduration=5.628417174 podStartE2EDuration="5.628417174s" podCreationTimestamp="2024-12-13 14:35:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:35:34.627287831 +0000 UTC m=+134.961300756" watchObservedRunningTime="2024-12-13 14:35:34.628417174 +0000 UTC m=+134.962430096" Dec 13 14:35:34.914316 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Dec 13 14:35:36.576900 systemd[1]: run-containerd-runc-k8s.io-7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4-runc.jrA4Az.mount: Deactivated successfully. 
Dec 13 14:35:36.957973 kubelet[2538]: W1213 14:35:36.957900 2538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8a4849_d9bd_4010_8a6c_433078d247c8.slice/cri-containerd-87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f.scope WatchSource:0}: task 87bf4a6370a2902d3ad164b838f68a8c95a90a698513078680e5de8d961a427f not found: not found Dec 13 14:35:38.682761 systemd-networkd[1441]: lxc_health: Link UP Dec 13 14:35:38.688299 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 14:35:38.688443 systemd-networkd[1441]: lxc_health: Gained carrier Dec 13 14:35:38.696365 (udev-worker)[5446]: Network interface NamePolicy= disabled on kernel command line. Dec 13 14:35:39.912397 systemd-networkd[1441]: lxc_health: Gained IPv6LL Dec 13 14:35:40.069668 kubelet[2538]: W1213 14:35:40.069610 2538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8a4849_d9bd_4010_8a6c_433078d247c8.slice/cri-containerd-8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e.scope WatchSource:0}: task 8f46bcdd3db62b3d46c8bbfeba9e72adcd3dcfb408ecb408612b538f6083070e not found: not found Dec 13 14:35:41.325834 systemd[1]: run-containerd-runc-k8s.io-7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4-runc.xQw1Uz.mount: Deactivated successfully. 
Dec 13 14:35:43.179658 kubelet[2538]: W1213 14:35:43.179575 2538 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8a4849_d9bd_4010_8a6c_433078d247c8.slice/cri-containerd-5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b.scope WatchSource:0}: task 5c632d82eec0e45cc9eef1467a2946541d1808bf38af0597a04696e0ae57761b not found: not found Dec 13 14:35:43.561455 systemd[1]: run-containerd-runc-k8s.io-7e8e6d90c18603ed501b680d6906972fbfba58d09aab1fc6ccf326854d9129e4-runc.HihHZX.mount: Deactivated successfully. Dec 13 14:35:46.042865 sshd[4549]: pam_unix(sshd:session): session closed for user core Dec 13 14:35:46.046165 systemd[1]: sshd@26-172.31.27.196:22-139.178.89.65:47346.service: Deactivated successfully. Dec 13 14:35:46.047419 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 14:35:46.048178 systemd-logind[1705]: Session 27 logged out. Waiting for processes to exit. Dec 13 14:35:46.049158 systemd-logind[1705]: Removed session 27. Dec 13 14:36:10.957655 systemd[1]: cri-containerd-bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760.scope: Deactivated successfully. Dec 13 14:36:10.958018 systemd[1]: cri-containerd-bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760.scope: Consumed 2.905s CPU time. Dec 13 14:36:11.000102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760-rootfs.mount: Deactivated successfully. 
Dec 13 14:36:11.012790 env[1719]: time="2024-12-13T14:36:11.012731799Z" level=info msg="shim disconnected" id=bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760
Dec 13 14:36:11.013374 env[1719]: time="2024-12-13T14:36:11.012803032Z" level=warning msg="cleaning up after shim disconnected" id=bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760 namespace=k8s.io
Dec 13 14:36:11.013374 env[1719]: time="2024-12-13T14:36:11.012820264Z" level=info msg="cleaning up dead shim"
Dec 13 14:36:11.025097 env[1719]: time="2024-12-13T14:36:11.025050052Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5574 runtime=io.containerd.runc.v2\n"
Dec 13 14:36:11.667645 kubelet[2538]: I1213 14:36:11.667610 2538 scope.go:117] "RemoveContainer" containerID="bf0c9cf888ea99a16ab5858c1e69cb9a329464555242e50dc85ffd64102e2760"
Dec 13 14:36:11.673822 env[1719]: time="2024-12-13T14:36:11.671382341Z" level=info msg="CreateContainer within sandbox \"79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Dec 13 14:36:11.714390 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount530781853.mount: Deactivated successfully.
Dec 13 14:36:11.733236 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2534777732.mount: Deactivated successfully.
Dec 13 14:36:11.740912 env[1719]: time="2024-12-13T14:36:11.740859180Z" level=info msg="CreateContainer within sandbox \"79fc9174b317cf145667f607d51d86d04014f642974d44bf6f245621c069757c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"37f675016c9a55c11614f72dad50063d68b703a3d1831e1f0fc3995645d45c21\""
Dec 13 14:36:11.741773 env[1719]: time="2024-12-13T14:36:11.741732576Z" level=info msg="StartContainer for \"37f675016c9a55c11614f72dad50063d68b703a3d1831e1f0fc3995645d45c21\""
Dec 13 14:36:11.774041 systemd[1]: Started cri-containerd-37f675016c9a55c11614f72dad50063d68b703a3d1831e1f0fc3995645d45c21.scope.
Dec 13 14:36:11.844861 env[1719]: time="2024-12-13T14:36:11.844808155Z" level=info msg="StartContainer for \"37f675016c9a55c11614f72dad50063d68b703a3d1831e1f0fc3995645d45c21\" returns successfully"
Dec 13 14:36:12.958169 kubelet[2538]: E1213 14:36:12.958113 2538 controller.go:195] "Failed to update lease" err="Put \"https://172.31.27.196:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-27-196?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Dec 13 14:36:16.394303 systemd[1]: cri-containerd-40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380.scope: Deactivated successfully.
Dec 13 14:36:16.394634 systemd[1]: cri-containerd-40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380.scope: Consumed 1.503s CPU time.
Dec 13 14:36:16.425075 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380-rootfs.mount: Deactivated successfully.
Dec 13 14:36:16.452694 env[1719]: time="2024-12-13T14:36:16.452640719Z" level=info msg="shim disconnected" id=40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380
Dec 13 14:36:16.452694 env[1719]: time="2024-12-13T14:36:16.452692254Z" level=warning msg="cleaning up after shim disconnected" id=40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380 namespace=k8s.io
Dec 13 14:36:16.453352 env[1719]: time="2024-12-13T14:36:16.452704070Z" level=info msg="cleaning up dead shim"
Dec 13 14:36:16.468087 env[1719]: time="2024-12-13T14:36:16.468041720Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:36:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5637 runtime=io.containerd.runc.v2\n"
Dec 13 14:36:16.699014 kubelet[2538]: I1213 14:36:16.697685 2538 scope.go:117] "RemoveContainer" containerID="40cd5f8199962dda183cba8b82f64bc88fe53a5776a5a9ff7ba05af877628380"
Dec 13 14:36:16.701725 env[1719]: time="2024-12-13T14:36:16.701682463Z" level=info msg="CreateContainer within sandbox \"0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Dec 13 14:36:16.741472 env[1719]: time="2024-12-13T14:36:16.741393416Z" level=info msg="CreateContainer within sandbox \"0164cd4a03b9162cd409359ff3eac3d08fab9c87f0b73579f4104fa110d5d158\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"84f1fbd47c8f0215435e7dc4a98dad661f0b70ff6199abf775c9bec74ee6cec6\""
Dec 13 14:36:16.743856 env[1719]: time="2024-12-13T14:36:16.743811942Z" level=info msg="StartContainer for \"84f1fbd47c8f0215435e7dc4a98dad661f0b70ff6199abf775c9bec74ee6cec6\""
Dec 13 14:36:16.784464 systemd[1]: Started cri-containerd-84f1fbd47c8f0215435e7dc4a98dad661f0b70ff6199abf775c9bec74ee6cec6.scope.
Dec 13 14:36:16.860527 env[1719]: time="2024-12-13T14:36:16.860466392Z" level=info msg="StartContainer for \"84f1fbd47c8f0215435e7dc4a98dad661f0b70ff6199abf775c9bec74ee6cec6\" returns successfully"
Dec 13 14:36:17.424461 systemd[1]: run-containerd-runc-k8s.io-84f1fbd47c8f0215435e7dc4a98dad661f0b70ff6199abf775c9bec74ee6cec6-runc.MHzD5m.mount: Deactivated successfully.
Dec 13 14:36:20.001193 env[1719]: time="2024-12-13T14:36:20.000817232Z" level=info msg="StopPodSandbox for \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\""
Dec 13 14:36:20.001193 env[1719]: time="2024-12-13T14:36:20.000959883Z" level=info msg="TearDown network for sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" successfully"
Dec 13 14:36:20.001193 env[1719]: time="2024-12-13T14:36:20.001017138Z" level=info msg="StopPodSandbox for \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" returns successfully"
Dec 13 14:36:20.003721 env[1719]: time="2024-12-13T14:36:20.002664925Z" level=info msg="RemovePodSandbox for \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\""
Dec 13 14:36:20.003721 env[1719]: time="2024-12-13T14:36:20.002706343Z" level=info msg="Forcibly stopping sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\""
Dec 13 14:36:20.003721 env[1719]: time="2024-12-13T14:36:20.003051074Z" level=info msg="TearDown network for sandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" successfully"
Dec 13 14:36:20.016047 env[1719]: time="2024-12-13T14:36:20.015990159Z" level=info msg="RemovePodSandbox \"8fe60222fc55b8068d0eade937c1b61a3d3de30d34123c8cf5b1106dab54876f\" returns successfully"
Dec 13 14:36:20.017876 env[1719]: time="2024-12-13T14:36:20.017713234Z" level=info msg="StopPodSandbox for \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\""
Dec 13 14:36:20.018257 env[1719]: time="2024-12-13T14:36:20.018189186Z" level=info msg="TearDown network for sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" successfully"
Dec 13 14:36:20.018452 env[1719]: time="2024-12-13T14:36:20.018430154Z" level=info msg="StopPodSandbox for \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" returns successfully"
Dec 13 14:36:20.019398 env[1719]: time="2024-12-13T14:36:20.019373858Z" level=info msg="RemovePodSandbox for \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\""
Dec 13 14:36:20.019633 env[1719]: time="2024-12-13T14:36:20.019589050Z" level=info msg="Forcibly stopping sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\""
Dec 13 14:36:20.019980 env[1719]: time="2024-12-13T14:36:20.019955134Z" level=info msg="TearDown network for sandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" successfully"
Dec 13 14:36:20.029112 env[1719]: time="2024-12-13T14:36:20.029053112Z" level=info msg="RemovePodSandbox \"29e7be9a0ae368f29f1fb6af05cab7e6cfa69021b5718fdf460fa3e8943d8a34\" returns successfully"
Dec 13 14:36:20.030101 env[1719]: time="2024-12-13T14:36:20.029989394Z" level=info msg="StopPodSandbox for \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\""
Dec 13 14:36:20.030462 env[1719]: time="2024-12-13T14:36:20.030409924Z" level=info msg="TearDown network for sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" successfully"
Dec 13 14:36:20.030655 env[1719]: time="2024-12-13T14:36:20.030635056Z" level=info msg="StopPodSandbox for \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" returns successfully"
Dec 13 14:36:20.031080 env[1719]: time="2024-12-13T14:36:20.031057766Z" level=info msg="RemovePodSandbox for \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\""
Dec 13 14:36:20.031223 env[1719]: time="2024-12-13T14:36:20.031184274Z" level=info msg="Forcibly stopping sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\""
Dec 13 14:36:20.031450 env[1719]: time="2024-12-13T14:36:20.031429127Z" level=info msg="TearDown network for sandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" successfully"
Dec 13 14:36:20.038513 env[1719]: time="2024-12-13T14:36:20.038466046Z" level=info msg="RemovePodSandbox \"879f79371b23cb2c68ff812c5b82d7ec58c276c2686e3be0d234243df8eb8f41\" returns successfully"