Jul 2 00:45:24.936457 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jul 2 00:45:24.936493 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Jul 1 23:37:37 -00 2024
Jul 2 00:45:24.936516 kernel: efi: EFI v2.70 by EDK II
Jul 2 00:45:24.936531 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x7173cf98
Jul 2 00:45:24.936544 kernel: ACPI: Early table checksum verification disabled
Jul 2 00:45:24.936558 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jul 2 00:45:24.936573 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jul 2 00:45:24.936587 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jul 2 00:45:24.936601 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Jul 2 00:45:24.936614 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jul 2 00:45:24.936632 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jul 2 00:45:24.936646 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jul 2 00:45:24.936660 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jul 2 00:45:24.936674 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jul 2 00:45:24.936691 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jul 2 00:45:24.936710 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jul 2 00:45:24.936724 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jul 2 00:45:24.936739 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jul 2 00:45:24.936753 kernel: printk: bootconsole [uart0] enabled
Jul 2 00:45:24.936768 kernel: NUMA: Failed to initialise from firmware
Jul 2 00:45:24.936783 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:45:24.936798 kernel: NUMA: NODE_DATA [mem 0x4b5843900-0x4b5848fff]
Jul 2 00:45:24.936812 kernel: Zone ranges:
Jul 2 00:45:24.936827 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jul 2 00:45:24.936841 kernel: DMA32 empty
Jul 2 00:45:24.936855 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jul 2 00:45:24.936873 kernel: Movable zone start for each node
Jul 2 00:45:24.936887 kernel: Early memory node ranges
Jul 2 00:45:24.936902 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jul 2 00:45:24.936916 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jul 2 00:45:24.936930 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jul 2 00:45:24.936945 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jul 2 00:45:24.936959 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jul 2 00:45:24.936973 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jul 2 00:45:24.936987 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jul 2 00:45:24.937002 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jul 2 00:45:24.937016 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jul 2 00:45:24.937031 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jul 2 00:45:24.937049 kernel: psci: probing for conduit method from ACPI.
Jul 2 00:45:24.937064 kernel: psci: PSCIv1.0 detected in firmware.
Jul 2 00:45:24.937084 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 2 00:45:24.937100 kernel: psci: Trusted OS migration not required
Jul 2 00:45:24.937115 kernel: psci: SMC Calling Convention v1.1
Jul 2 00:45:24.937134 kernel: ACPI: SRAT not present
Jul 2 00:45:24.937150 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Jul 2 00:45:24.937183 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Jul 2 00:45:24.937205 kernel: pcpu-alloc: [0] 0 [0] 1
Jul 2 00:45:24.937220 kernel: Detected PIPT I-cache on CPU0
Jul 2 00:45:24.937236 kernel: CPU features: detected: GIC system register CPU interface
Jul 2 00:45:24.937251 kernel: CPU features: detected: Spectre-v2
Jul 2 00:45:24.937266 kernel: CPU features: detected: Spectre-v3a
Jul 2 00:45:24.937281 kernel: CPU features: detected: Spectre-BHB
Jul 2 00:45:24.937296 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 2 00:45:24.937311 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 2 00:45:24.937331 kernel: CPU features: detected: ARM erratum 1742098
Jul 2 00:45:24.937347 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jul 2 00:45:24.937362 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jul 2 00:45:24.937377 kernel: Policy zone: Normal
Jul 2 00:45:24.937395 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:45:24.937412 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 00:45:24.937427 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 00:45:24.937443 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 2 00:45:24.937458 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 00:45:24.937473 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jul 2 00:45:24.937493 kernel: Memory: 3824588K/4030464K available (9792K kernel code, 2092K rwdata, 7572K rodata, 36352K init, 777K bss, 205876K reserved, 0K cma-reserved)
Jul 2 00:45:24.937509 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 00:45:24.937524 kernel: trace event string verifier disabled
Jul 2 00:45:24.937539 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 2 00:45:24.937555 kernel: rcu: RCU event tracing is enabled.
Jul 2 00:45:24.937571 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 00:45:24.937586 kernel: Trampoline variant of Tasks RCU enabled.
Jul 2 00:45:24.937602 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 00:45:24.937617 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 00:45:24.937632 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 00:45:24.937647 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 2 00:45:24.937663 kernel: GICv3: 96 SPIs implemented
Jul 2 00:45:24.937682 kernel: GICv3: 0 Extended SPIs implemented
Jul 2 00:45:24.937697 kernel: GICv3: Distributor has no Range Selector support
Jul 2 00:45:24.937712 kernel: Root IRQ handler: gic_handle_irq
Jul 2 00:45:24.937727 kernel: GICv3: 16 PPIs implemented
Jul 2 00:45:24.937742 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jul 2 00:45:24.937757 kernel: ACPI: SRAT not present
Jul 2 00:45:24.937771 kernel: ITS [mem 0x10080000-0x1009ffff]
Jul 2 00:45:24.937787 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Jul 2 00:45:24.937802 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Jul 2 00:45:24.937817 kernel: GICv3: using LPI property table @0x00000004000c0000
Jul 2 00:45:24.937832 kernel: ITS: Using hypervisor restricted LPI range [128]
Jul 2 00:45:24.937851 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Jul 2 00:45:24.937866 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jul 2 00:45:24.937881 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jul 2 00:45:24.937897 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jul 2 00:45:24.937912 kernel: Console: colour dummy device 80x25
Jul 2 00:45:24.937927 kernel: printk: console [tty1] enabled
Jul 2 00:45:24.937943 kernel: ACPI: Core revision 20210730
Jul 2 00:45:24.937958 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jul 2 00:45:24.937974 kernel: pid_max: default: 32768 minimum: 301
Jul 2 00:45:24.937989 kernel: LSM: Security Framework initializing
Jul 2 00:45:24.938008 kernel: SELinux: Initializing.
Jul 2 00:45:24.938024 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:45:24.938039 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 2 00:45:24.938055 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 00:45:24.938070 kernel: Platform MSI: ITS@0x10080000 domain created
Jul 2 00:45:24.938085 kernel: PCI/MSI: ITS@0x10080000 domain created
Jul 2 00:45:24.938101 kernel: Remapping and enabling EFI services.
Jul 2 00:45:24.938116 kernel: smp: Bringing up secondary CPUs ...
Jul 2 00:45:24.938131 kernel: Detected PIPT I-cache on CPU1
Jul 2 00:45:24.938147 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jul 2 00:45:24.938182 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Jul 2 00:45:24.941249 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jul 2 00:45:24.941266 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 00:45:24.941283 kernel: SMP: Total of 2 processors activated.
Jul 2 00:45:24.941300 kernel: CPU features: detected: 32-bit EL0 Support
Jul 2 00:45:24.941315 kernel: CPU features: detected: 32-bit EL1 Support
Jul 2 00:45:24.941331 kernel: CPU features: detected: CRC32 instructions
Jul 2 00:45:24.941347 kernel: CPU: All CPU(s) started at EL1
Jul 2 00:45:24.941362 kernel: alternatives: patching kernel code
Jul 2 00:45:24.941386 kernel: devtmpfs: initialized
Jul 2 00:45:24.941402 kernel: KASLR disabled due to lack of seed
Jul 2 00:45:24.941428 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 00:45:24.941449 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 00:45:24.941465 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 00:45:24.941481 kernel: SMBIOS 3.0.0 present.
Jul 2 00:45:24.941496 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jul 2 00:45:24.941513 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 00:45:24.941529 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 2 00:45:24.941545 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 2 00:45:24.941562 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 2 00:45:24.941582 kernel: audit: initializing netlink subsys (disabled)
Jul 2 00:45:24.941599 kernel: audit: type=2000 audit(0.248:1): state=initialized audit_enabled=0 res=1
Jul 2 00:45:24.941615 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 00:45:24.941631 kernel: cpuidle: using governor menu
Jul 2 00:45:24.941647 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 2 00:45:24.941667 kernel: ASID allocator initialised with 32768 entries
Jul 2 00:45:24.941684 kernel: ACPI: bus type PCI registered
Jul 2 00:45:24.941700 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 00:45:24.941715 kernel: Serial: AMBA PL011 UART driver
Jul 2 00:45:24.941731 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 00:45:24.941748 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Jul 2 00:45:24.941764 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 00:45:24.941780 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Jul 2 00:45:24.941796 kernel: cryptd: max_cpu_qlen set to 1000
Jul 2 00:45:24.941816 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 2 00:45:24.941832 kernel: ACPI: Added _OSI(Module Device)
Jul 2 00:45:24.941848 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 00:45:24.941864 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 00:45:24.941880 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 00:45:24.941897 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 00:45:24.941914 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 00:45:24.941930 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 00:45:24.941946 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 2 00:45:24.941966 kernel: ACPI: Interpreter enabled
Jul 2 00:45:24.941982 kernel: ACPI: Using GIC for interrupt routing
Jul 2 00:45:24.941998 kernel: ACPI: MCFG table detected, 1 entries
Jul 2 00:45:24.942014 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Jul 2 00:45:24.942358 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 00:45:24.942566 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 2 00:45:24.942762 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 2 00:45:24.942983 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Jul 2 00:45:24.944203 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Jul 2 00:45:24.944235 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jul 2 00:45:24.944253 kernel: acpiphp: Slot [1] registered
Jul 2 00:45:24.944269 kernel: acpiphp: Slot [2] registered
Jul 2 00:45:24.944286 kernel: acpiphp: Slot [3] registered
Jul 2 00:45:24.944302 kernel: acpiphp: Slot [4] registered
Jul 2 00:45:24.944318 kernel: acpiphp: Slot [5] registered
Jul 2 00:45:24.944334 kernel: acpiphp: Slot [6] registered
Jul 2 00:45:24.944350 kernel: acpiphp: Slot [7] registered
Jul 2 00:45:24.944373 kernel: acpiphp: Slot [8] registered
Jul 2 00:45:24.944389 kernel: acpiphp: Slot [9] registered
Jul 2 00:45:24.944405 kernel: acpiphp: Slot [10] registered
Jul 2 00:45:24.944421 kernel: acpiphp: Slot [11] registered
Jul 2 00:45:24.944437 kernel: acpiphp: Slot [12] registered
Jul 2 00:45:24.944453 kernel: acpiphp: Slot [13] registered
Jul 2 00:45:24.944469 kernel: acpiphp: Slot [14] registered
Jul 2 00:45:24.944485 kernel: acpiphp: Slot [15] registered
Jul 2 00:45:24.944500 kernel: acpiphp: Slot [16] registered
Jul 2 00:45:24.944520 kernel: acpiphp: Slot [17] registered
Jul 2 00:45:24.944536 kernel: acpiphp: Slot [18] registered
Jul 2 00:45:24.944552 kernel: acpiphp: Slot [19] registered
Jul 2 00:45:24.944568 kernel: acpiphp: Slot [20] registered
Jul 2 00:45:24.944584 kernel: acpiphp: Slot [21] registered
Jul 2 00:45:24.944600 kernel: acpiphp: Slot [22] registered
Jul 2 00:45:24.944616 kernel: acpiphp: Slot [23] registered
Jul 2 00:45:24.944631 kernel: acpiphp: Slot [24] registered
Jul 2 00:45:24.944647 kernel: acpiphp: Slot [25] registered
Jul 2 00:45:24.944663 kernel: acpiphp: Slot [26] registered
Jul 2 00:45:24.944683 kernel: acpiphp: Slot [27] registered
Jul 2 00:45:24.944699 kernel: acpiphp: Slot [28] registered
Jul 2 00:45:24.944714 kernel: acpiphp: Slot [29] registered
Jul 2 00:45:24.944730 kernel: acpiphp: Slot [30] registered
Jul 2 00:45:24.944746 kernel: acpiphp: Slot [31] registered
Jul 2 00:45:24.944762 kernel: PCI host bridge to bus 0000:00
Jul 2 00:45:24.944979 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jul 2 00:45:24.969639 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 2 00:45:24.969910 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:45:24.970095 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Jul 2 00:45:24.970346 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jul 2 00:45:24.970568 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jul 2 00:45:24.970793 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jul 2 00:45:24.971017 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jul 2 00:45:24.971248 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jul 2 00:45:24.971451 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:45:24.971665 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jul 2 00:45:24.971867 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jul 2 00:45:24.972067 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jul 2 00:45:24.973356 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jul 2 00:45:24.973577 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jul 2 00:45:24.973788 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Jul 2 00:45:24.973999 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Jul 2 00:45:24.975332 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Jul 2 00:45:24.975568 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Jul 2 00:45:24.975777 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Jul 2 00:45:24.975964 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jul 2 00:45:24.976144 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 2 00:45:24.976368 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jul 2 00:45:24.976392 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 2 00:45:24.976410 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 2 00:45:24.976427 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 2 00:45:24.976443 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 2 00:45:24.976460 kernel: iommu: Default domain type: Translated
Jul 2 00:45:24.976477 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 2 00:45:24.976493 kernel: vgaarb: loaded
Jul 2 00:45:24.976509 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 00:45:24.976531 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 00:45:24.976547 kernel: PTP clock support registered
Jul 2 00:45:24.976563 kernel: Registered efivars operations
Jul 2 00:45:24.976580 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 2 00:45:24.976596 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 00:45:24.976612 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 00:45:24.976629 kernel: pnp: PnP ACPI init
Jul 2 00:45:24.976846 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jul 2 00:45:24.976875 kernel: pnp: PnP ACPI: found 1 devices
Jul 2 00:45:24.976893 kernel: NET: Registered PF_INET protocol family
Jul 2 00:45:24.976909 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 00:45:24.976926 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 2 00:45:24.976942 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 00:45:24.976959 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 2 00:45:24.976976 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Jul 2 00:45:24.976992 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 2 00:45:24.977009 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:45:24.977029 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 2 00:45:24.977046 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 00:45:24.977062 kernel: PCI: CLS 0 bytes, default 64
Jul 2 00:45:24.977078 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jul 2 00:45:24.977095 kernel: kvm [1]: HYP mode not available
Jul 2 00:45:24.977111 kernel: Initialise system trusted keyrings
Jul 2 00:45:24.977128 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 2 00:45:24.977144 kernel: Key type asymmetric registered
Jul 2 00:45:24.977160 kernel: Asymmetric key parser 'x509' registered
Jul 2 00:45:24.977201 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 00:45:24.977219 kernel: io scheduler mq-deadline registered
Jul 2 00:45:24.977235 kernel: io scheduler kyber registered
Jul 2 00:45:24.977252 kernel: io scheduler bfq registered
Jul 2 00:45:24.977464 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jul 2 00:45:24.977489 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 2 00:45:24.977506 kernel: ACPI: button: Power Button [PWRB]
Jul 2 00:45:24.977523 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jul 2 00:45:24.977544 kernel: ACPI: button: Sleep Button [SLPB]
Jul 2 00:45:24.977561 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 00:45:24.977578 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jul 2 00:45:24.977777 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jul 2 00:45:24.977800 kernel: printk: console [ttyS0] disabled
Jul 2 00:45:24.977817 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jul 2 00:45:24.977833 kernel: printk: console [ttyS0] enabled
Jul 2 00:45:24.977850 kernel: printk: bootconsole [uart0] disabled
Jul 2 00:45:24.977866 kernel: thunder_xcv, ver 1.0
Jul 2 00:45:24.977882 kernel: thunder_bgx, ver 1.0
Jul 2 00:45:24.977903 kernel: nicpf, ver 1.0
Jul 2 00:45:24.977919 kernel: nicvf, ver 1.0
Jul 2 00:45:24.978123 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 2 00:45:24.989690 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-07-02T00:45:24 UTC (1719881124)
Jul 2 00:45:24.989735 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 2 00:45:24.989753 kernel: NET: Registered PF_INET6 protocol family
Jul 2 00:45:24.989771 kernel: Segment Routing with IPv6
Jul 2 00:45:24.989787 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 00:45:24.989812 kernel: NET: Registered PF_PACKET protocol family
Jul 2 00:45:24.989829 kernel: Key type dns_resolver registered
Jul 2 00:45:24.989845 kernel: registered taskstats version 1
Jul 2 00:45:24.989862 kernel: Loading compiled-in X.509 certificates
Jul 2 00:45:24.989879 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: c418313b450e4055b23e41c11cb6dc415de0265d'
Jul 2 00:45:24.989895 kernel: Key type .fscrypt registered
Jul 2 00:45:24.989911 kernel: Key type fscrypt-provisioning registered
Jul 2 00:45:24.989927 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 2 00:45:24.989943 kernel: ima: Allocated hash algorithm: sha1
Jul 2 00:45:24.989964 kernel: ima: No architecture policies found
Jul 2 00:45:24.989980 kernel: clk: Disabling unused clocks
Jul 2 00:45:24.989996 kernel: Freeing unused kernel memory: 36352K
Jul 2 00:45:24.990012 kernel: Run /init as init process
Jul 2 00:45:24.990028 kernel: with arguments:
Jul 2 00:45:24.990044 kernel: /init
Jul 2 00:45:24.990060 kernel: with environment:
Jul 2 00:45:24.990077 kernel: HOME=/
Jul 2 00:45:24.990093 kernel: TERM=linux
Jul 2 00:45:24.990113 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 00:45:24.990134 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 00:45:24.990155 systemd[1]: Detected virtualization amazon.
Jul 2 00:45:24.990196 systemd[1]: Detected architecture arm64.
Jul 2 00:45:24.990216 systemd[1]: Running in initrd.
Jul 2 00:45:24.990234 systemd[1]: No hostname configured, using default hostname.
Jul 2 00:45:24.990253 systemd[1]: Hostname set to .
Jul 2 00:45:24.990276 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 00:45:24.990294 systemd[1]: Queued start job for default target initrd.target.
Jul 2 00:45:24.990312 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 00:45:24.990330 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:45:24.990348 systemd[1]: Reached target paths.target.
Jul 2 00:45:24.990365 systemd[1]: Reached target slices.target.
Jul 2 00:45:24.990383 systemd[1]: Reached target swap.target.
Jul 2 00:45:24.990401 systemd[1]: Reached target timers.target.
Jul 2 00:45:24.990423 systemd[1]: Listening on iscsid.socket.
Jul 2 00:45:24.990441 systemd[1]: Listening on iscsiuio.socket.
Jul 2 00:45:24.990459 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 00:45:24.990477 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 00:45:24.990495 systemd[1]: Listening on systemd-journald.socket.
Jul 2 00:45:24.990512 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 00:45:24.990530 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 00:45:24.990548 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 00:45:24.990570 systemd[1]: Reached target sockets.target.
Jul 2 00:45:24.990588 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:45:24.990606 systemd[1]: Finished network-cleanup.service.
Jul 2 00:45:24.990623 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 00:45:24.990641 systemd[1]: Starting systemd-journald.service...
Jul 2 00:45:24.990659 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:45:24.990677 systemd[1]: Starting systemd-resolved.service...
Jul 2 00:45:24.990695 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 00:45:24.990713 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:45:24.990735 kernel: audit: type=1130 audit(1719881124.948:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:24.990754 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 00:45:24.990785 kernel: audit: type=1130 audit(1719881124.962:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:24.990810 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:45:24.990829 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 00:45:24.990850 systemd-journald[309]: Journal started
Jul 2 00:45:24.990941 systemd-journald[309]: Runtime Journal (/run/log/journal/ec2f4d09a03dd5d6cef3c430edc363a9) is 8.0M, max 75.4M, 67.4M free.
Jul 2 00:45:24.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:24.962000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:24.938656 systemd-modules-load[310]: Inserted module 'overlay'
Jul 2 00:45:24.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.005187 systemd[1]: Started systemd-journald.service.
Jul 2 00:45:25.005234 kernel: audit: type=1130 audit(1719881124.990:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.005000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.007249 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:45:25.021039 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 00:45:25.030603 kernel: audit: type=1130 audit(1719881125.005:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.030639 kernel: audit: type=1130 audit(1719881125.018:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.047194 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 00:45:25.056321 systemd-modules-load[310]: Inserted module 'br_netfilter'
Jul 2 00:45:25.057201 kernel: Bridge firewalling registered
Jul 2 00:45:25.063007 systemd-resolved[311]: Positive Trust Anchors:
Jul 2 00:45:25.063036 systemd-resolved[311]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 00:45:25.063093 systemd-resolved[311]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 00:45:25.096202 kernel: SCSI subsystem initialized
Jul 2 00:45:25.111598 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 2 00:45:25.111683 kernel: device-mapper: uevent: version 1.0.3
Jul 2 00:45:25.115421 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Jul 2 00:45:25.113006 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 00:45:25.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.128469 kernel: audit: type=1130 audit(1719881125.120:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.128314 systemd[1]: Starting dracut-cmdline.service...
Jul 2 00:45:25.137278 systemd-modules-load[310]: Inserted module 'dm_multipath'
Jul 2 00:45:25.140723 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:45:25.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.155852 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:45:25.162739 kernel: audit: type=1130 audit(1719881125.146:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.162835 dracut-cmdline[326]: dracut-dracut-053
Jul 2 00:45:25.167899 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=7b86ecfcd4701bdf4668db795601b20c118ac0b117c34a9b3836e0a5236b73b0
Jul 2 00:45:25.203247 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:45:25.215298 kernel: audit: type=1130 audit(1719881125.205:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.286201 kernel: Loading iSCSI transport class v2.0-870.
Jul 2 00:45:25.307207 kernel: iscsi: registered transport (tcp)
Jul 2 00:45:25.333752 kernel: iscsi: registered transport (qla4xxx)
Jul 2 00:45:25.333834 kernel: QLogic iSCSI HBA Driver
Jul 2 00:45:25.557909 systemd-resolved[311]: Defaulting to hostname 'linux'.
Jul 2 00:45:25.560730 kernel: random: crng init done
Jul 2 00:45:25.559850 systemd[1]: Started systemd-resolved.service.
Jul 2 00:45:25.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.563910 systemd[1]: Reached target nss-lookup.target.
Jul 2 00:45:25.572898 kernel: audit: type=1130 audit(1719881125.562:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.588945 systemd[1]: Finished dracut-cmdline.service.
Jul 2 00:45:25.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:25.593849 systemd[1]: Starting dracut-pre-udev.service...
Jul 2 00:45:25.659210 kernel: raid6: neonx8 gen() 6354 MB/s
Jul 2 00:45:25.677197 kernel: raid6: neonx8 xor() 4623 MB/s
Jul 2 00:45:25.695196 kernel: raid6: neonx4 gen() 6539 MB/s
Jul 2 00:45:25.713197 kernel: raid6: neonx4 xor() 4801 MB/s
Jul 2 00:45:25.731197 kernel: raid6: neonx2 gen() 5755 MB/s
Jul 2 00:45:25.749196 kernel: raid6: neonx2 xor() 4416 MB/s
Jul 2 00:45:25.767196 kernel: raid6: neonx1 gen() 4472 MB/s
Jul 2 00:45:25.785197 kernel: raid6: neonx1 xor() 3592 MB/s
Jul 2 00:45:25.803196 kernel: raid6: int64x8 gen() 3433 MB/s
Jul 2 00:45:25.821197 kernel: raid6: int64x8 xor() 2053 MB/s
Jul 2 00:45:25.839197 kernel: raid6: int64x4 gen() 3827 MB/s
Jul 2 00:45:25.857197 kernel: raid6: int64x4 xor() 2165 MB/s
Jul 2 00:45:25.875195 kernel: raid6: int64x2 gen() 3590 MB/s
Jul 2 00:45:25.893198 kernel: raid6: int64x2 xor() 1918 MB/s
Jul 2 00:45:25.911196 kernel: raid6: int64x1 gen() 2755 MB/s
Jul 2 00:45:25.930150 kernel: raid6: int64x1 xor() 1396 MB/s
Jul 2 00:45:25.930198 kernel: raid6: using algorithm neonx4 gen() 6539 MB/s
Jul 2 00:45:25.930223 kernel: raid6: .... xor() 4801 MB/s, rmw enabled
Jul 2 00:45:25.931772 kernel: raid6: using neon recovery algorithm
Jul 2 00:45:25.950204 kernel: xor: measuring software checksum speed
Jul 2 00:45:25.952201 kernel: 8regs : 9333 MB/sec
Jul 2 00:45:25.955196 kernel: 32regs : 11117 MB/sec
Jul 2 00:45:25.958233 kernel: arm64_neon : 9292 MB/sec
Jul 2 00:45:25.958265 kernel: xor: using function: 32regs (11117 MB/sec)
Jul 2 00:45:26.048212 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Jul 2 00:45:26.065744 systemd[1]: Finished dracut-pre-udev.service.
Jul 2 00:45:26.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:26.067000 audit: BPF prog-id=7 op=LOAD
Jul 2 00:45:26.067000 audit: BPF prog-id=8 op=LOAD
Jul 2 00:45:26.070685 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:45:26.099101 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Jul 2 00:45:26.109928 systemd[1]: Started systemd-udevd.service.
Jul 2 00:45:26.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:26.115914 systemd[1]: Starting dracut-pre-trigger.service...
Jul 2 00:45:26.145395 dracut-pre-trigger[518]: rd.md=0: removing MD RAID activation
Jul 2 00:45:26.205196 systemd[1]: Finished dracut-pre-trigger.service.
Jul 2 00:45:26.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:26.209558 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:45:26.319001 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:45:26.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:26.435739 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 2 00:45:26.435808 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jul 2 00:45:26.447186 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jul 2 00:45:26.447533 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jul 2 00:45:26.457197 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:ee:7c:e4:bb:21
Jul 2 00:45:26.459874 (udev-worker)[564]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:45:26.466504 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jul 2 00:45:26.466576 kernel: nvme nvme0: pci function 0000:00:04.0
Jul 2 00:45:26.476218 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jul 2 00:45:26.481263 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 2 00:45:26.481311 kernel: GPT:9289727 != 16777215
Jul 2 00:45:26.481335 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 2 00:45:26.483106 kernel: GPT:9289727 != 16777215
Jul 2 00:45:26.484229 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 2 00:45:26.487148 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:45:26.552211 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (558)
Jul 2 00:45:26.572161 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Jul 2 00:45:26.639504 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:45:26.676784 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Jul 2 00:45:26.677580 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Jul 2 00:45:26.690518 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Jul 2 00:45:26.696453 systemd[1]: Starting disk-uuid.service...
Jul 2 00:45:26.709311 disk-uuid[667]: Primary Header is updated.
Jul 2 00:45:26.709311 disk-uuid[667]: Secondary Entries is updated.
Jul 2 00:45:26.709311 disk-uuid[667]: Secondary Header is updated.
Jul 2 00:45:26.718215 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:45:26.727205 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:45:27.734193 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jul 2 00:45:27.736215 disk-uuid[668]: The operation has completed successfully.
Jul 2 00:45:27.899599 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 2 00:45:27.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:27.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:27.899795 systemd[1]: Finished disk-uuid.service.
Jul 2 00:45:27.913634 systemd[1]: Starting verity-setup.service...
Jul 2 00:45:27.949748 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jul 2 00:45:28.035018 systemd[1]: Found device dev-mapper-usr.device.
Jul 2 00:45:28.039696 systemd[1]: Mounting sysusr-usr.mount...
Jul 2 00:45:28.043440 systemd[1]: Finished verity-setup.service.
Jul 2 00:45:28.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.129215 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 00:45:28.130599 systemd[1]: Mounted sysusr-usr.mount.
Jul 2 00:45:28.133239 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Jul 2 00:45:28.136892 systemd[1]: Starting ignition-setup.service...
Jul 2 00:45:28.139552 systemd[1]: Starting parse-ip-for-networkd.service...
Jul 2 00:45:28.167864 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:45:28.167929 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:45:28.170017 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:45:28.181368 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:45:28.198555 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 2 00:45:28.223643 systemd[1]: Finished ignition-setup.service.
Jul 2 00:45:28.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.227833 systemd[1]: Starting ignition-fetch-offline.service...
Jul 2 00:45:28.298132 systemd[1]: Finished parse-ip-for-networkd.service.
Jul 2 00:45:28.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.300000 audit: BPF prog-id=9 op=LOAD
Jul 2 00:45:28.302909 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:45:28.350110 systemd-networkd[1107]: lo: Link UP
Jul 2 00:45:28.350134 systemd-networkd[1107]: lo: Gained carrier
Jul 2 00:45:28.353607 systemd-networkd[1107]: Enumeration completed
Jul 2 00:45:28.354066 systemd-networkd[1107]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:45:28.358527 systemd[1]: Started systemd-networkd.service.
Jul 2 00:45:28.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.362240 systemd[1]: Reached target network.target.
Jul 2 00:45:28.365664 systemd-networkd[1107]: eth0: Link UP
Jul 2 00:45:28.365684 systemd-networkd[1107]: eth0: Gained carrier
Jul 2 00:45:28.369307 systemd[1]: Starting iscsiuio.service...
Jul 2 00:45:28.383046 systemd[1]: Started iscsiuio.service.
Jul 2 00:45:28.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.387033 systemd[1]: Starting iscsid.service...
Jul 2 00:45:28.388476 systemd-networkd[1107]: eth0: DHCPv4 address 172.31.20.46/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:45:28.397706 iscsid[1112]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:45:28.397706 iscsid[1112]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Jul 2 00:45:28.397706 iscsid[1112]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Jul 2 00:45:28.397706 iscsid[1112]: If using hardware iscsi like qla4xxx this message can be ignored.
Jul 2 00:45:28.397706 iscsid[1112]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Jul 2 00:45:28.416074 iscsid[1112]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Jul 2 00:45:28.420851 systemd[1]: Started iscsid.service.
Jul 2 00:45:28.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.436652 systemd[1]: Starting dracut-initqueue.service...
Jul 2 00:45:28.460722 systemd[1]: Finished dracut-initqueue.service.
Jul 2 00:45:28.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:28.464797 systemd[1]: Reached target remote-fs-pre.target.
Jul 2 00:45:28.466504 systemd[1]: Reached target remote-cryptsetup.target.
Jul 2 00:45:28.471071 systemd[1]: Reached target remote-fs.target.
Jul 2 00:45:28.475423 systemd[1]: Starting dracut-pre-mount.service...
Jul 2 00:45:28.492826 systemd[1]: Finished dracut-pre-mount.service.
Jul 2 00:45:28.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.097969 ignition[1046]: Ignition 2.14.0
Jul 2 00:45:29.097996 ignition[1046]: Stage: fetch-offline
Jul 2 00:45:29.098332 ignition[1046]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.098397 ignition[1046]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.124771 ignition[1046]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.125802 ignition[1046]: Ignition finished successfully
Jul 2 00:45:29.130009 systemd[1]: Finished ignition-fetch-offline.service.
Jul 2 00:45:29.142641 kernel: kauditd_printk_skb: 18 callbacks suppressed
Jul 2 00:45:29.142687 kernel: audit: type=1130 audit(1719881129.130:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.133081 systemd[1]: Starting ignition-fetch.service...
Jul 2 00:45:29.150888 ignition[1131]: Ignition 2.14.0
Jul 2 00:45:29.150917 ignition[1131]: Stage: fetch
Jul 2 00:45:29.151377 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.152475 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.171618 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.173748 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:29.182764 ignition[1131]: INFO : PUT result: OK
Jul 2 00:45:29.185721 ignition[1131]: DEBUG : parsed url from cmdline: ""
Jul 2 00:45:29.185721 ignition[1131]: INFO : no config URL provided
Jul 2 00:45:29.185721 ignition[1131]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Jul 2 00:45:29.191157 ignition[1131]: INFO : no config at "/usr/lib/ignition/user.ign"
Jul 2 00:45:29.191157 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:29.191157 ignition[1131]: INFO : PUT result: OK
Jul 2 00:45:29.191157 ignition[1131]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jul 2 00:45:29.198957 ignition[1131]: INFO : GET result: OK
Jul 2 00:45:29.200432 ignition[1131]: DEBUG : parsing config with SHA512: bb721656a5f56e1e473c851a0619f0f5ac0cbe868bb8af62ee48c5d315bf9d22e84110492948f39c22523e1f9eaf56d80b6d3fad1b9577db58d44457a0698a5a
Jul 2 00:45:29.210396 unknown[1131]: fetched base config from "system"
Jul 2 00:45:29.210424 unknown[1131]: fetched base config from "system"
Jul 2 00:45:29.210440 unknown[1131]: fetched user config from "aws"
Jul 2 00:45:29.216003 ignition[1131]: fetch: fetch complete
Jul 2 00:45:29.216030 ignition[1131]: fetch: fetch passed
Jul 2 00:45:29.216155 ignition[1131]: Ignition finished successfully
Jul 2 00:45:29.222493 systemd[1]: Finished ignition-fetch.service.
Jul 2 00:45:29.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.226672 systemd[1]: Starting ignition-kargs.service...
Jul 2 00:45:29.234355 kernel: audit: type=1130 audit(1719881129.224:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.247939 ignition[1137]: Ignition 2.14.0
Jul 2 00:45:29.247968 ignition[1137]: Stage: kargs
Jul 2 00:45:29.248295 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.248353 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.262962 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.265038 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:29.268639 ignition[1137]: INFO : PUT result: OK
Jul 2 00:45:29.273093 ignition[1137]: kargs: kargs passed
Jul 2 00:45:29.275095 systemd[1]: Finished ignition-kargs.service.
Jul 2 00:45:29.289018 kernel: audit: type=1130 audit(1719881129.280:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.273200 ignition[1137]: Ignition finished successfully
Jul 2 00:45:29.291258 systemd[1]: Starting ignition-disks.service...
Jul 2 00:45:29.305912 ignition[1143]: Ignition 2.14.0
Jul 2 00:45:29.305941 ignition[1143]: Stage: disks
Jul 2 00:45:29.306264 ignition[1143]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.306323 ignition[1143]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.320337 ignition[1143]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.322448 ignition[1143]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:29.325186 ignition[1143]: INFO : PUT result: OK
Jul 2 00:45:29.330003 ignition[1143]: disks: disks passed
Jul 2 00:45:29.330109 ignition[1143]: Ignition finished successfully
Jul 2 00:45:29.334232 systemd[1]: Finished ignition-disks.service.
Jul 2 00:45:29.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.337093 systemd[1]: Reached target initrd-root-device.target.
Jul 2 00:45:29.360651 kernel: audit: type=1130 audit(1719881129.335:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.344523 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:45:29.346033 systemd[1]: Reached target local-fs.target.
Jul 2 00:45:29.358873 systemd[1]: Reached target sysinit.target.
Jul 2 00:45:29.361672 systemd[1]: Reached target basic.target.
Jul 2 00:45:29.369626 systemd[1]: Starting systemd-fsck-root.service...
Jul 2 00:45:29.411761 systemd-fsck[1151]: ROOT: clean, 614/553520 files, 56019/553472 blocks
Jul 2 00:45:29.419146 systemd[1]: Finished systemd-fsck-root.service.
Jul 2 00:45:29.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.424783 systemd[1]: Mounting sysroot.mount...
Jul 2 00:45:29.431053 kernel: audit: type=1130 audit(1719881129.419:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.448217 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 00:45:29.449506 systemd[1]: Mounted sysroot.mount.
Jul 2 00:45:29.451981 systemd[1]: Reached target initrd-root-fs.target.
Jul 2 00:45:29.465121 systemd[1]: Mounting sysroot-usr.mount...
Jul 2 00:45:29.467323 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Jul 2 00:45:29.467399 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 2 00:45:29.467450 systemd[1]: Reached target ignition-diskful.target.
Jul 2 00:45:29.477581 systemd[1]: Mounted sysroot-usr.mount.
Jul 2 00:45:29.494824 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:45:29.504421 systemd-networkd[1107]: eth0: Gained IPv6LL
Jul 2 00:45:29.508564 systemd[1]: Starting initrd-setup-root.service...
Jul 2 00:45:29.516725 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1168)
Jul 2 00:45:29.521793 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:45:29.521862 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:45:29.523941 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:45:29.530194 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:45:29.532462 initrd-setup-root[1173]: cut: /sysroot/etc/passwd: No such file or directory
Jul 2 00:45:29.535141 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:45:29.553770 initrd-setup-root[1199]: cut: /sysroot/etc/group: No such file or directory
Jul 2 00:45:29.562112 initrd-setup-root[1207]: cut: /sysroot/etc/shadow: No such file or directory
Jul 2 00:45:29.570439 initrd-setup-root[1215]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 2 00:45:29.796041 systemd[1]: Finished initrd-setup-root.service.
Jul 2 00:45:29.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.799121 systemd[1]: Starting ignition-mount.service...
Jul 2 00:45:29.809202 kernel: audit: type=1130 audit(1719881129.796:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.810282 systemd[1]: Starting sysroot-boot.service...
Jul 2 00:45:29.820767 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Jul 2 00:45:29.820941 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Jul 2 00:45:29.855973 systemd[1]: Finished sysroot-boot.service.
Jul 2 00:45:29.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.865210 kernel: audit: type=1130 audit(1719881129.857:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.869218 ignition[1235]: INFO : Ignition 2.14.0
Jul 2 00:45:29.869218 ignition[1235]: INFO : Stage: mount
Jul 2 00:45:29.872348 ignition[1235]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.872348 ignition[1235]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.889509 ignition[1235]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.892251 ignition[1235]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:29.895088 ignition[1235]: INFO : PUT result: OK
Jul 2 00:45:29.900393 ignition[1235]: INFO : mount: mount passed
Jul 2 00:45:29.901960 ignition[1235]: INFO : Ignition finished successfully
Jul 2 00:45:29.904983 systemd[1]: Finished ignition-mount.service.
Jul 2 00:45:29.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.915543 systemd[1]: Starting ignition-files.service...
Jul 2 00:45:29.918261 kernel: audit: type=1130 audit(1719881129.906:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:29.925219 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Jul 2 00:45:29.941210 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1244)
Jul 2 00:45:29.946450 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jul 2 00:45:29.946506 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jul 2 00:45:29.946531 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Jul 2 00:45:29.954198 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jul 2 00:45:29.958928 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Jul 2 00:45:29.977336 ignition[1263]: INFO : Ignition 2.14.0
Jul 2 00:45:29.979103 ignition[1263]: INFO : Stage: files
Jul 2 00:45:29.980765 ignition[1263]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Jul 2 00:45:29.983055 ignition[1263]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Jul 2 00:45:29.997333 ignition[1263]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jul 2 00:45:29.999562 ignition[1263]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jul 2 00:45:30.002146 ignition[1263]: INFO : PUT result: OK
Jul 2 00:45:30.007146 ignition[1263]: DEBUG : files: compiled without relabeling support, skipping
Jul 2 00:45:30.010520 ignition[1263]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 2 00:45:30.010520 ignition[1263]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 2 00:45:30.046516 ignition[1263]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 2 00:45:30.049179 ignition[1263]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 2 00:45:30.054601 unknown[1263]: wrote ssh authorized keys file for user: core
Jul 2 00:45:30.057582 ignition[1263]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 2 00:45:30.060481 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:45:30.063879 ignition[1263]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Jul 2 00:45:30.122019 ignition[1263]: INFO : GET result: OK
Jul 2 00:45:30.244141 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Jul 2 00:45:30.248152 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:45:30.248152 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 2 00:45:30.248152 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 2 00:45:30.248152 ignition[1263]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:45:30.268521 ignition[1263]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem461000046"
Jul 2 00:45:30.276598 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1268)
Jul 2 00:45:30.276638 ignition[1263]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem461000046": device or resource busy
Jul 2 00:45:30.276638 ignition[1263]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem461000046", trying btrfs: device or resource busy
Jul 2 00:45:30.276638 ignition[1263]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem461000046"
Jul 2 00:45:30.287875 ignition[1263]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem461000046"
Jul 2 00:45:30.287875 ignition[1263]: INFO : op(3): [started] unmounting "/mnt/oem461000046"
Jul 2 00:45:30.287875 ignition[1263]: INFO : op(3): [finished] unmounting "/mnt/oem461000046"
Jul 2 00:45:30.287875 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Jul 2 00:45:30.297531 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:45:30.300855 ignition[1263]: INFO : GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Jul 2 00:45:30.745194 ignition[1263]: INFO : GET result: OK
Jul 2 00:45:30.917015 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 2 00:45:30.920266 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Jul 2 00:45:30.923476 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Jul 2 00:45:30.926524 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:45:30.929753 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 2 00:45:30.932826 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:45:30.936150 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 2 00:45:30.950881 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:45:30.954255 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 2 00:45:30.957341 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:45:30.961853 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:45:30.966858 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:45:30.970254 ignition[1263]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:45:30.983089 ignition[1263]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1256781694"
Jul 2 00:45:30.983089 ignition[1263]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1256781694": device or resource busy
Jul 2 00:45:30.983089 ignition[1263]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1256781694", trying btrfs: device or resource busy
Jul 2 00:45:30.983089 ignition[1263]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1256781694"
Jul 2 00:45:30.983089 ignition[1263]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1256781694"
Jul 2 00:45:30.983089 ignition[1263]: INFO : op(6): [started] unmounting "/mnt/oem1256781694"
Jul 2 00:45:30.983089 ignition[1263]: INFO : op(6): [finished] unmounting "/mnt/oem1256781694"
Jul 2 00:45:30.983089 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Jul 2 00:45:30.983089 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:45:30.983089 ignition[1263]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Jul 2 00:45:31.015752 systemd[1]: mnt-oem1256781694.mount: Deactivated successfully.
Jul 2 00:45:31.034790 ignition[1263]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3638448940"
Jul 2 00:45:31.037714 ignition[1263]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3638448940": device or resource busy
Jul 2 00:45:31.040624 ignition[1263]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3638448940", trying btrfs: device or resource busy
Jul 2 00:45:31.043837 ignition[1263]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3638448940"
Jul 2 00:45:31.046391 ignition[1263]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3638448940"
Jul 2 00:45:31.059280 ignition[1263]: INFO : op(9): [started] unmounting "/mnt/oem3638448940"
Jul 2 00:45:31.059280 ignition[1263]: INFO : op(9): [finished] unmounting "/mnt/oem3638448940"
Jul 2 00:45:31.059280 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Jul 2 00:45:31.059280 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"
Jul 2 00:45:31.059280 ignition[1263]: INFO : GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1
Jul 2 00:45:31.055679 systemd[1]: mnt-oem3638448940.mount: Deactivated successfully.
Jul 2 00:45:31.406522 ignition[1263]: INFO : GET result: OK Jul 2 00:45:31.939467 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jul 2 00:45:31.946981 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 2 00:45:31.946981 ignition[1263]: INFO : oem config not found in "/usr/share/oem", looking on oem partition Jul 2 00:45:31.956212 ignition[1263]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3232989587" Jul 2 00:45:31.958841 ignition[1263]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3232989587": device or resource busy Jul 2 00:45:31.958841 ignition[1263]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3232989587", trying btrfs: device or resource busy Jul 2 00:45:31.958841 ignition[1263]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3232989587" Jul 2 00:45:31.967387 ignition[1263]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3232989587" Jul 2 00:45:31.967387 ignition[1263]: INFO : op(c): [started] unmounting "/mnt/oem3232989587" Jul 2 00:45:31.973277 ignition[1263]: INFO : op(c): [finished] unmounting "/mnt/oem3232989587" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(10): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(10): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(11): [started] processing unit "amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(11): op(12): [started] writing unit 
"amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(11): op(12): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(11): [finished] processing unit "amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(13): [started] processing unit "nvidia.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(13): [finished] processing unit "nvidia.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(14): [started] processing unit "prepare-helm.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(14): op(15): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(14): op(15): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(14): [finished] processing unit "prepare-helm.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(16): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(16): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(17): [started] setting preset to enabled for "amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(17): [finished] setting preset to enabled for "amazon-ssm-agent.service" Jul 2 00:45:31.973277 ignition[1263]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" Jul 2 00:45:32.023210 ignition[1263]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" Jul 2 00:45:32.023210 ignition[1263]: INFO : files: op(19): [started] setting preset to enabled 
for "prepare-helm.service" Jul 2 00:45:32.023210 ignition[1263]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 00:45:32.039014 systemd[1]: mnt-oem3232989587.mount: Deactivated successfully. Jul 2 00:45:32.060500 ignition[1263]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:45:32.065503 ignition[1263]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 00:45:32.065503 ignition[1263]: INFO : files: files passed Jul 2 00:45:32.065503 ignition[1263]: INFO : Ignition finished successfully Jul 2 00:45:32.071939 systemd[1]: Finished ignition-files.service. Jul 2 00:45:32.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.082218 kernel: audit: type=1130 audit(1719881132.074:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.090549 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 00:45:32.094895 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 00:45:32.099021 systemd[1]: Starting ignition-quench.service... Jul 2 00:45:32.109748 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 00:45:32.120332 kernel: audit: type=1130 audit(1719881132.111:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:45:32.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.110804 systemd[1]: Finished ignition-quench.service. Jul 2 00:45:32.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.124193 initrd-setup-root-after-ignition[1288]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 00:45:32.128154 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 00:45:32.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.131728 systemd[1]: Reached target ignition-complete.target. Jul 2 00:45:32.136119 systemd[1]: Starting initrd-parse-etc.service... Jul 2 00:45:32.164110 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 00:45:32.165927 systemd[1]: Finished initrd-parse-etc.service. Jul 2 00:45:32.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.169080 systemd[1]: Reached target initrd-fs.target. Jul 2 00:45:32.175662 systemd[1]: Reached target initrd.target. Jul 2 00:45:32.178268 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Jul 2 00:45:32.182048 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 00:45:32.205455 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 00:45:32.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.209721 systemd[1]: Starting initrd-cleanup.service... Jul 2 00:45:32.230046 systemd[1]: Stopped target nss-lookup.target. Jul 2 00:45:32.233231 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 00:45:32.236574 systemd[1]: Stopped target timers.target. Jul 2 00:45:32.239525 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 00:45:32.241430 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 00:45:32.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.244625 systemd[1]: Stopped target initrd.target. Jul 2 00:45:32.247443 systemd[1]: Stopped target basic.target. Jul 2 00:45:32.250189 systemd[1]: Stopped target ignition-complete.target. Jul 2 00:45:32.253354 systemd[1]: Stopped target ignition-diskful.target. Jul 2 00:45:32.256493 systemd[1]: Stopped target initrd-root-device.target. Jul 2 00:45:32.259758 systemd[1]: Stopped target remote-fs.target. Jul 2 00:45:32.262587 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 00:45:32.265629 systemd[1]: Stopped target sysinit.target. Jul 2 00:45:32.268417 systemd[1]: Stopped target local-fs.target. Jul 2 00:45:32.271237 systemd[1]: Stopped target local-fs-pre.target. Jul 2 00:45:32.274215 systemd[1]: Stopped target swap.target. Jul 2 00:45:32.276836 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 00:45:32.278764 systemd[1]: Stopped dracut-pre-mount.service. 
Jul 2 00:45:32.280000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.281852 systemd[1]: Stopped target cryptsetup.target. Jul 2 00:45:32.284745 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 00:45:32.286635 systemd[1]: Stopped dracut-initqueue.service. Jul 2 00:45:32.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.289718 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 00:45:32.291935 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 00:45:32.294000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.295596 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 00:45:32.297451 systemd[1]: Stopped ignition-files.service. Jul 2 00:45:32.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.301806 systemd[1]: Stopping ignition-mount.service... Jul 2 00:45:32.323983 ignition[1301]: INFO : Ignition 2.14.0 Jul 2 00:45:32.323983 ignition[1301]: INFO : Stage: umount Jul 2 00:45:32.323983 ignition[1301]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 00:45:32.323983 ignition[1301]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Jul 2 00:45:32.339454 systemd[1]: Stopping sysroot-boot.service... 
Jul 2 00:45:32.354732 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jul 2 00:45:32.354732 ignition[1301]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jul 2 00:45:32.354732 ignition[1301]: INFO : PUT result: OK Jul 2 00:45:32.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.352634 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 00:45:32.368938 ignition[1301]: INFO : umount: umount passed Jul 2 00:45:32.368938 ignition[1301]: INFO : Ignition finished successfully Jul 2 00:45:32.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.354137 systemd[1]: Stopped systemd-udev-trigger.service. Jul 2 00:45:32.361343 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 00:45:32.361625 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 00:45:32.383949 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 00:45:32.384745 systemd[1]: Finished initrd-cleanup.service. Jul 2 00:45:32.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.389197 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jul 2 00:45:32.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.389402 systemd[1]: Stopped ignition-mount.service. Jul 2 00:45:32.394340 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 00:45:32.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.394826 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 00:45:32.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.394908 systemd[1]: Stopped ignition-disks.service. Jul 2 00:45:32.396481 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 00:45:32.396561 systemd[1]: Stopped ignition-kargs.service. Jul 2 00:45:32.399949 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 00:45:32.400786 systemd[1]: Stopped ignition-fetch.service. Jul 2 00:45:32.402818 systemd[1]: Stopped target network.target. Jul 2 00:45:32.413755 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 00:45:32.414236 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 00:45:32.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.418749 systemd[1]: Stopped target paths.target. 
Jul 2 00:45:32.421344 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 00:45:32.425226 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 00:45:32.428419 systemd[1]: Stopped target slices.target. Jul 2 00:45:32.431102 systemd[1]: Stopped target sockets.target. Jul 2 00:45:32.433926 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 00:45:32.434084 systemd[1]: Closed iscsid.socket. Jul 2 00:45:32.437928 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 00:45:32.437992 systemd[1]: Closed iscsiuio.socket. Jul 2 00:45:32.441590 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 00:45:32.441678 systemd[1]: Stopped ignition-setup.service. Jul 2 00:45:32.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.449129 systemd[1]: Stopping systemd-networkd.service... Jul 2 00:45:32.450726 systemd[1]: Stopping systemd-resolved.service... Jul 2 00:45:32.458858 systemd-networkd[1107]: eth0: DHCPv6 lease lost Jul 2 00:45:32.462750 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 00:45:32.462967 systemd[1]: Stopped systemd-resolved.service. Jul 2 00:45:32.466000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.467904 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 00:45:32.481000 audit: BPF prog-id=6 op=UNLOAD Jul 2 00:45:32.483043 systemd[1]: Stopped systemd-networkd.service. Jul 2 00:45:32.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:45:32.486431 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 00:45:32.487000 audit: BPF prog-id=9 op=UNLOAD Jul 2 00:45:32.488298 systemd[1]: Stopped sysroot-boot.service. Jul 2 00:45:32.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.491312 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 00:45:32.491398 systemd[1]: Closed systemd-networkd.socket. Jul 2 00:45:32.494461 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 00:45:32.497665 systemd[1]: Stopped initrd-setup-root.service. Jul 2 00:45:32.499000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.501845 systemd[1]: Stopping network-cleanup.service... Jul 2 00:45:32.506318 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 00:45:32.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.506446 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 00:45:32.509479 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 00:45:32.509570 systemd[1]: Stopped systemd-sysctl.service. 
Jul 2 00:45:32.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.511259 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 00:45:32.511341 systemd[1]: Stopped systemd-modules-load.service. Jul 2 00:45:32.521359 systemd[1]: Stopping systemd-udevd.service... Jul 2 00:45:32.524741 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 00:45:32.534226 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 00:45:32.535841 systemd[1]: Stopped network-cleanup.service. Jul 2 00:45:32.554106 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 00:45:32.554595 systemd[1]: Stopped systemd-udevd.service. Jul 2 00:45:32.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.559074 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 00:45:32.559558 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 00:45:32.563791 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 00:45:32.563961 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 00:45:32.568538 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 00:45:32.568647 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 00:45:32.570000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.573055 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 00:45:32.573157 systemd[1]: Stopped dracut-cmdline.service. 
Jul 2 00:45:32.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.577700 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 00:45:32.577790 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 00:45:32.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.583711 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 00:45:32.595796 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 2 00:45:32.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.595919 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Jul 2 00:45:32.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:32.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:45:32.601674 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 00:45:32.601774 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 00:45:32.603780 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 00:45:32.603866 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 00:45:32.608312 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 2 00:45:32.609250 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 00:45:32.609438 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 00:45:32.611359 systemd[1]: Reached target initrd-switch-root.target. Jul 2 00:45:32.616480 systemd[1]: Starting initrd-switch-root.service... Jul 2 00:45:32.636459 systemd[1]: Switching root. Jul 2 00:45:32.663516 iscsid[1112]: iscsid shutting down. Jul 2 00:45:32.664938 systemd-journald[309]: Received SIGTERM from PID 1 (n/a). Jul 2 00:45:32.665027 systemd-journald[309]: Journal stopped Jul 2 00:45:38.122761 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 00:45:38.122865 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 00:45:38.122902 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 00:45:38.122934 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 00:45:38.122968 kernel: SELinux: policy capability open_perms=1 Jul 2 00:45:38.122998 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 00:45:38.123033 kernel: SELinux: policy capability always_check_network=0 Jul 2 00:45:38.123064 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 00:45:38.123096 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 00:45:38.123127 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 00:45:38.123159 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 00:45:38.123213 systemd[1]: Successfully loaded SELinux policy in 99.663ms. 
Jul 2 00:45:38.123274 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.972ms. Jul 2 00:45:38.123309 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 00:45:38.123343 systemd[1]: Detected virtualization amazon. Jul 2 00:45:38.123373 systemd[1]: Detected architecture arm64. Jul 2 00:45:38.123402 systemd[1]: Detected first boot. Jul 2 00:45:38.123434 systemd[1]: Initializing machine ID from VM UUID. Jul 2 00:45:38.123466 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 00:45:38.123496 systemd[1]: Populated /etc with preset unit settings. Jul 2 00:45:38.123527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:45:38.123560 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:45:38.123598 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Jul 2 00:45:38.123635 kernel: kauditd_printk_skb: 55 callbacks suppressed Jul 2 00:45:38.123664 kernel: audit: type=1334 audit(1719881137.692:87): prog-id=12 op=LOAD Jul 2 00:45:38.123694 kernel: audit: type=1334 audit(1719881137.696:88): prog-id=3 op=UNLOAD Jul 2 00:45:38.123721 kernel: audit: type=1334 audit(1719881137.696:89): prog-id=13 op=LOAD Jul 2 00:45:38.123751 kernel: audit: type=1334 audit(1719881137.698:90): prog-id=14 op=LOAD Jul 2 00:45:38.123780 kernel: audit: type=1334 audit(1719881137.698:91): prog-id=4 op=UNLOAD Jul 2 00:45:38.123814 kernel: audit: type=1334 audit(1719881137.698:92): prog-id=5 op=UNLOAD Jul 2 00:45:38.123846 kernel: audit: type=1334 audit(1719881137.703:93): prog-id=15 op=LOAD Jul 2 00:45:38.123876 kernel: audit: type=1334 audit(1719881137.703:94): prog-id=12 op=UNLOAD Jul 2 00:45:38.123904 kernel: audit: type=1334 audit(1719881137.705:95): prog-id=16 op=LOAD Jul 2 00:45:38.123933 kernel: audit: type=1334 audit(1719881137.707:96): prog-id=17 op=LOAD Jul 2 00:45:38.123964 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 00:45:38.125763 systemd[1]: Stopped iscsiuio.service. Jul 2 00:45:38.126145 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 00:45:38.126688 systemd[1]: Stopped iscsid.service. Jul 2 00:45:38.127085 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 00:45:38.127191 systemd[1]: Stopped initrd-switch-root.service. Jul 2 00:45:38.127226 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 00:45:38.129214 systemd[1]: Created slice system-addon\x2dconfig.slice. Jul 2 00:45:38.129261 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 00:45:38.129296 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 00:45:38.129328 systemd[1]: Created slice system-getty.slice. Jul 2 00:45:38.129363 systemd[1]: Created slice system-modprobe.slice. Jul 2 00:45:38.129394 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Jul 2 00:45:38.129425 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 00:45:38.129457 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 00:45:38.129488 systemd[1]: Created slice user.slice. Jul 2 00:45:38.129520 systemd[1]: Started systemd-ask-password-console.path. Jul 2 00:45:38.129552 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 00:45:38.129581 systemd[1]: Set up automount boot.automount. Jul 2 00:45:38.129613 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 00:45:38.129646 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 00:45:38.129675 systemd[1]: Stopped target initrd-fs.target. Jul 2 00:45:38.129704 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 00:45:38.129732 systemd[1]: Reached target integritysetup.target. Jul 2 00:45:38.129765 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 00:45:38.129794 systemd[1]: Reached target remote-fs.target. Jul 2 00:45:38.129824 systemd[1]: Reached target slices.target. Jul 2 00:45:38.129855 systemd[1]: Reached target swap.target. Jul 2 00:45:38.129885 systemd[1]: Reached target torcx.target. Jul 2 00:45:38.129921 systemd[1]: Reached target veritysetup.target. Jul 2 00:45:38.129951 systemd[1]: Listening on systemd-coredump.socket. Jul 2 00:45:38.129983 systemd[1]: Listening on systemd-initctl.socket. Jul 2 00:45:38.130012 systemd[1]: Listening on systemd-networkd.socket. Jul 2 00:45:38.130042 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 00:45:38.130071 systemd[1]: Listening on systemd-udevd-kernel.socket. Jul 2 00:45:38.130100 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 00:45:38.130129 systemd[1]: Mounting dev-hugepages.mount... Jul 2 00:45:38.131312 systemd[1]: Mounting dev-mqueue.mount... Jul 2 00:45:38.131357 systemd[1]: Mounting media.mount... Jul 2 00:45:38.131394 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 00:45:38.131423 systemd[1]: Mounting sys-kernel-tracing.mount... 
Jul 2 00:45:38.131462 systemd[1]: Mounting tmp.mount...
Jul 2 00:45:38.131494 systemd[1]: Starting flatcar-tmpfiles.service...
Jul 2 00:45:38.131524 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 00:45:38.131555 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 00:45:38.131587 systemd[1]: Starting modprobe@configfs.service...
Jul 2 00:45:38.131616 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 00:45:38.131645 systemd[1]: Starting modprobe@drm.service...
Jul 2 00:45:38.131680 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 00:45:38.131709 systemd[1]: Starting modprobe@fuse.service...
Jul 2 00:45:38.131738 systemd[1]: Starting modprobe@loop.service...
Jul 2 00:45:38.131771 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 2 00:45:38.131802 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 2 00:45:38.131831 systemd[1]: Stopped systemd-fsck-root.service.
Jul 2 00:45:38.131913 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 2 00:45:38.131949 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 2 00:45:38.131979 kernel: loop: module loaded
Jul 2 00:45:38.132014 systemd[1]: Stopped systemd-journald.service.
Jul 2 00:45:38.132080 systemd[1]: Starting systemd-journald.service...
Jul 2 00:45:38.132114 kernel: fuse: init (API version 7.34)
Jul 2 00:45:38.132143 systemd[1]: Starting systemd-modules-load.service...
Jul 2 00:45:38.135001 systemd[1]: Starting systemd-network-generator.service...
Jul 2 00:45:38.135052 systemd[1]: Starting systemd-remount-fs.service...
Jul 2 00:45:38.135083 systemd[1]: Starting systemd-udev-trigger.service...
Jul 2 00:45:38.135116 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 2 00:45:38.135146 systemd[1]: Stopped verity-setup.service.
Jul 2 00:45:38.135202 systemd[1]: Mounted dev-hugepages.mount.
Jul 2 00:45:38.135235 systemd[1]: Mounted dev-mqueue.mount.
Jul 2 00:45:38.135265 systemd[1]: Mounted media.mount.
Jul 2 00:45:38.135294 systemd[1]: Mounted sys-kernel-debug.mount.
Jul 2 00:45:38.135323 systemd[1]: Mounted sys-kernel-tracing.mount.
Jul 2 00:45:38.135352 systemd[1]: Mounted tmp.mount.
Jul 2 00:45:38.135381 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 00:45:38.135412 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 2 00:45:38.135442 systemd[1]: Finished modprobe@configfs.service.
Jul 2 00:45:38.135475 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:45:38.135505 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 00:45:38.135534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 00:45:38.135565 systemd[1]: Finished modprobe@drm.service.
Jul 2 00:45:38.135594 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:45:38.135628 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 00:45:38.135658 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 2 00:45:38.135688 systemd[1]: Finished modprobe@fuse.service.
Jul 2 00:45:38.135718 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:45:38.135757 systemd[1]: Finished modprobe@loop.service.
Jul 2 00:45:38.135790 systemd[1]: Finished systemd-modules-load.service.
Jul 2 00:45:38.135826 systemd-journald[1426]: Journal started
Jul 2 00:45:38.135924 systemd-journald[1426]: Runtime Journal (/run/log/journal/ec2f4d09a03dd5d6cef3c430edc363a9) is 8.0M, max 75.4M, 67.4M free.
Jul 2 00:45:33.417000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 2 00:45:33.620000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 00:45:33.620000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Jul 2 00:45:33.620000 audit: BPF prog-id=10 op=LOAD
Jul 2 00:45:33.620000 audit: BPF prog-id=10 op=UNLOAD
Jul 2 00:45:33.620000 audit: BPF prog-id=11 op=LOAD
Jul 2 00:45:33.620000 audit: BPF prog-id=11 op=UNLOAD
Jul 2 00:45:33.791000 audit[1335]: AVC avc: denied { associate } for pid=1335 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Jul 2 00:45:33.791000 audit[1335]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458c4 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1318 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 00:45:33.791000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 00:45:33.794000 audit[1335]: AVC avc: denied { associate } for pid=1335 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Jul 2 00:45:33.794000 audit[1335]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001459a9 a2=1ed a3=0 items=2 ppid=1318 pid=1335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 00:45:33.794000 audit: CWD cwd="/"
Jul 2 00:45:33.794000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 00:45:33.794000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Jul 2 00:45:33.794000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Jul 2 00:45:37.692000 audit: BPF prog-id=12 op=LOAD
Jul 2 00:45:37.696000 audit: BPF prog-id=3 op=UNLOAD
Jul 2 00:45:37.696000 audit: BPF prog-id=13 op=LOAD
Jul 2 00:45:37.698000 audit: BPF prog-id=14 op=LOAD
Jul 2 00:45:37.698000 audit: BPF prog-id=4 op=UNLOAD
Jul 2 00:45:37.698000 audit: BPF prog-id=5 op=UNLOAD
Jul 2 00:45:37.703000 audit: BPF prog-id=15 op=LOAD
Jul 2 00:45:37.703000 audit: BPF prog-id=12 op=UNLOAD
Jul 2 00:45:37.705000 audit: BPF prog-id=16 op=LOAD
Jul 2 00:45:37.707000 audit: BPF prog-id=17 op=LOAD
Jul 2 00:45:37.707000 audit: BPF prog-id=13 op=UNLOAD
Jul 2 00:45:37.707000 audit: BPF prog-id=14 op=UNLOAD
Jul 2 00:45:37.709000 audit: BPF prog-id=18 op=LOAD
Jul 2 00:45:37.709000 audit: BPF prog-id=15 op=UNLOAD
Jul 2 00:45:37.711000 audit: BPF prog-id=19 op=LOAD
Jul 2 00:45:37.713000 audit: BPF prog-id=20 op=LOAD
Jul 2 00:45:37.713000 audit: BPF prog-id=16 op=UNLOAD
Jul 2 00:45:37.713000 audit: BPF prog-id=17 op=UNLOAD
Jul 2 00:45:37.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.141959 systemd[1]: Started systemd-journald.service.
Jul 2 00:45:37.720000 audit: BPF prog-id=18 op=UNLOAD
Jul 2 00:45:37.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.994000 audit: BPF prog-id=21 op=LOAD
Jul 2 00:45:37.994000 audit: BPF prog-id=22 op=LOAD
Jul 2 00:45:37.994000 audit: BPF prog-id=23 op=LOAD
Jul 2 00:45:37.994000 audit: BPF prog-id=19 op=UNLOAD
Jul 2 00:45:37.994000 audit: BPF prog-id=20 op=UNLOAD
Jul 2 00:45:38.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.102000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.117000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jul 2 00:45:38.117000 audit[1426]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff0b446d0 a2=4000 a3=1 items=0 ppid=1 pid=1426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 00:45:38.117000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jul 2 00:45:38.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:33.779261 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 00:45:37.691058 systemd[1]: Queued start job for default target multi-user.target.
Jul 2 00:45:33.788840 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 00:45:38.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:37.715842 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 2 00:45:33.788898 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 00:45:38.142783 systemd[1]: Finished flatcar-tmpfiles.service.
Jul 2 00:45:33.788964 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Jul 2 00:45:38.144887 systemd[1]: Finished systemd-network-generator.service.
Jul 2 00:45:33.788990 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="skipped missing lower profile" missing profile=oem
Jul 2 00:45:38.148401 systemd[1]: Finished systemd-remount-fs.service.
Jul 2 00:45:33.789055 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Jul 2 00:45:38.151092 systemd[1]: Reached target network-pre.target.
Jul 2 00:45:33.789087 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Jul 2 00:45:33.789520 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Jul 2 00:45:33.789597 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Jul 2 00:45:33.789632 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Jul 2 00:45:33.791494 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Jul 2 00:45:33.791578 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Jul 2 00:45:33.791624 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5
Jul 2 00:45:33.791664 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Jul 2 00:45:33.791710 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5
Jul 2 00:45:33.791748 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:33Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Jul 2 00:45:36.804224 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 00:45:36.804752 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 00:45:38.157900 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Jul 2 00:45:36.805008 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 00:45:38.162659 systemd[1]: Mounting sys-kernel-config.mount...
Jul 2 00:45:36.805480 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Jul 2 00:45:36.805583 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Jul 2 00:45:36.805718 /usr/lib/systemd/system-generators/torcx-generator[1335]: time="2024-07-02T00:45:36Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Jul 2 00:45:38.169302 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 2 00:45:38.172482 systemd[1]: Starting systemd-hwdb-update.service...
Jul 2 00:45:38.176325 systemd[1]: Starting systemd-journal-flush.service...
Jul 2 00:45:38.177955 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:45:38.185799 systemd[1]: Starting systemd-random-seed.service...
Jul 2 00:45:38.187441 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 00:45:38.189566 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:45:38.193924 systemd[1]: Starting systemd-sysusers.service...
Jul 2 00:45:38.204494 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Jul 2 00:45:38.207645 systemd[1]: Mounted sys-kernel-config.mount.
Jul 2 00:45:38.216298 systemd-journald[1426]: Time spent on flushing to /var/log/journal/ec2f4d09a03dd5d6cef3c430edc363a9 is 70.905ms for 1150 entries.
Jul 2 00:45:38.216298 systemd-journald[1426]: System Journal (/var/log/journal/ec2f4d09a03dd5d6cef3c430edc363a9) is 8.0M, max 195.6M, 187.6M free.
Jul 2 00:45:38.329210 systemd-journald[1426]: Received client request to flush runtime journal.
Jul 2 00:45:38.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.239776 systemd[1]: Finished systemd-random-seed.service.
Jul 2 00:45:38.241620 systemd[1]: Reached target first-boot-complete.target.
Jul 2 00:45:38.264280 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:45:38.316078 systemd[1]: Finished systemd-udev-trigger.service.
Jul 2 00:45:38.320380 systemd[1]: Starting systemd-udev-settle.service...
Jul 2 00:45:38.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.331030 systemd[1]: Finished systemd-journal-flush.service.
Jul 2 00:45:38.340503 udevadm[1454]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 2 00:45:38.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:38.345389 systemd[1]: Finished systemd-sysusers.service.
Jul 2 00:45:38.349322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 00:45:38.435402 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 00:45:38.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.079497 systemd[1]: Finished systemd-hwdb-update.service.
Jul 2 00:45:39.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.083000 audit: BPF prog-id=24 op=LOAD
Jul 2 00:45:39.083000 audit: BPF prog-id=25 op=LOAD
Jul 2 00:45:39.083000 audit: BPF prog-id=7 op=UNLOAD
Jul 2 00:45:39.083000 audit: BPF prog-id=8 op=UNLOAD
Jul 2 00:45:39.085643 systemd[1]: Starting systemd-udevd.service...
Jul 2 00:45:39.123256 systemd-udevd[1457]: Using default interface naming scheme 'v252'.
Jul 2 00:45:39.171489 systemd[1]: Started systemd-udevd.service.
Jul 2 00:45:39.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.173000 audit: BPF prog-id=26 op=LOAD
Jul 2 00:45:39.176119 systemd[1]: Starting systemd-networkd.service...
Jul 2 00:45:39.191000 audit: BPF prog-id=27 op=LOAD
Jul 2 00:45:39.191000 audit: BPF prog-id=28 op=LOAD
Jul 2 00:45:39.191000 audit: BPF prog-id=29 op=LOAD
Jul 2 00:45:39.194117 systemd[1]: Starting systemd-userdbd.service...
Jul 2 00:45:39.255633 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Jul 2 00:45:39.281729 systemd[1]: Started systemd-userdbd.service.
Jul 2 00:45:39.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.304311 (udev-worker)[1470]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:45:39.425429 systemd-networkd[1462]: lo: Link UP
Jul 2 00:45:39.425452 systemd-networkd[1462]: lo: Gained carrier
Jul 2 00:45:39.426384 systemd-networkd[1462]: Enumeration completed
Jul 2 00:45:39.426543 systemd[1]: Started systemd-networkd.service.
Jul 2 00:45:39.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.430369 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 00:45:39.432343 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 2 00:45:39.437201 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Jul 2 00:45:39.438161 systemd-networkd[1462]: eth0: Link UP
Jul 2 00:45:39.438476 systemd-networkd[1462]: eth0: Gained carrier
Jul 2 00:45:39.446409 systemd-networkd[1462]: eth0: DHCPv4 address 172.31.20.46/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jul 2 00:45:39.538246 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1468)
Jul 2 00:45:39.640622 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Jul 2 00:45:39.643152 systemd[1]: Finished systemd-udev-settle.service.
Jul 2 00:45:39.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.647066 systemd[1]: Starting lvm2-activation-early.service...
Jul 2 00:45:39.699724 lvm[1575]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:45:39.735822 systemd[1]: Finished lvm2-activation-early.service.
Jul 2 00:45:39.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.737744 systemd[1]: Reached target cryptsetup.target.
Jul 2 00:45:39.741382 systemd[1]: Starting lvm2-activation.service...
Jul 2 00:45:39.749709 lvm[1576]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 2 00:45:39.788820 systemd[1]: Finished lvm2-activation.service.
Jul 2 00:45:39.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.790767 systemd[1]: Reached target local-fs-pre.target.
Jul 2 00:45:39.792532 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 2 00:45:39.792715 systemd[1]: Reached target local-fs.target.
Jul 2 00:45:39.794335 systemd[1]: Reached target machines.target.
Jul 2 00:45:39.810962 systemd[1]: Starting ldconfig.service...
Jul 2 00:45:39.813635 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 00:45:39.813967 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:45:39.816704 systemd[1]: Starting systemd-boot-update.service...
Jul 2 00:45:39.820694 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Jul 2 00:45:39.825132 systemd[1]: Starting systemd-machine-id-commit.service...
Jul 2 00:45:39.829910 systemd[1]: Starting systemd-sysext.service...
Jul 2 00:45:39.848797 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1578 (bootctl)
Jul 2 00:45:39.851149 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Jul 2 00:45:39.867050 systemd[1]: Unmounting usr-share-oem.mount...
Jul 2 00:45:39.877550 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Jul 2 00:45:39.877922 systemd[1]: Unmounted usr-share-oem.mount.
Jul 2 00:45:39.907956 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Jul 2 00:45:39.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.911263 kernel: loop0: detected capacity change from 0 to 194096
Jul 2 00:45:39.950221 systemd-fsck[1588]: fsck.fat 4.2 (2021-01-31)
Jul 2 00:45:39.950221 systemd-fsck[1588]: /dev/nvme0n1p1: 236 files, 117047/258078 clusters
Jul 2 00:45:39.954593 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Jul 2 00:45:39.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:39.959122 systemd[1]: Mounting boot.mount...
Jul 2 00:45:39.978026 systemd[1]: Mounted boot.mount.
Jul 2 00:45:40.008311 systemd[1]: Finished systemd-boot-update.service.
Jul 2 00:45:40.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.143219 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 2 00:45:40.180322 kernel: loop1: detected capacity change from 0 to 194096
Jul 2 00:45:40.197155 (sd-sysext)[1606]: Using extensions 'kubernetes'.
Jul 2 00:45:40.198056 (sd-sysext)[1606]: Merged extensions into '/usr'.
Jul 2 00:45:40.236963 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 00:45:40.242049 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 00:45:40.244646 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 00:45:40.248801 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 00:45:40.253313 systemd[1]: Starting modprobe@loop.service...
Jul 2 00:45:40.255503 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 00:45:40.255844 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 00:45:40.262296 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 00:45:40.266903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 00:45:40.267323 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 00:45:40.267000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.269878 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 00:45:40.270196 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 00:45:40.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.271000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.272922 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 00:45:40.273229 systemd[1]: Finished modprobe@loop.service.
Jul 2 00:45:40.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 00:45:40.275756 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 00:45:40.275983 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 00:45:40.280952 systemd[1]: Finished systemd-sysext.service.
Jul 2 00:45:40.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:40.287352 systemd[1]: Starting ensure-sysext.service... Jul 2 00:45:40.291633 systemd[1]: Starting systemd-tmpfiles-setup.service... Jul 2 00:45:40.309354 systemd[1]: Reloading. Jul 2 00:45:40.330701 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 00:45:40.332599 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 00:45:40.349331 systemd-tmpfiles[1613]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 00:45:40.503162 /usr/lib/systemd/system-generators/torcx-generator[1632]: time="2024-07-02T00:45:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:45:40.509749 /usr/lib/systemd/system-generators/torcx-generator[1632]: time="2024-07-02T00:45:40Z" level=info msg="torcx already run" Jul 2 00:45:40.742628 ldconfig[1577]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 00:45:40.780207 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:45:40.780245 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 00:45:40.818767 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:45:40.950096 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 00:45:40.958000 audit: BPF prog-id=30 op=LOAD Jul 2 00:45:40.958000 audit: BPF prog-id=27 op=UNLOAD Jul 2 00:45:40.959000 audit: BPF prog-id=31 op=LOAD Jul 2 00:45:40.959000 audit: BPF prog-id=32 op=LOAD Jul 2 00:45:40.959000 audit: BPF prog-id=28 op=UNLOAD Jul 2 00:45:40.959000 audit: BPF prog-id=29 op=UNLOAD Jul 2 00:45:40.961000 audit: BPF prog-id=33 op=LOAD Jul 2 00:45:40.961000 audit: BPF prog-id=34 op=LOAD Jul 2 00:45:40.961000 audit: BPF prog-id=24 op=UNLOAD Jul 2 00:45:40.961000 audit: BPF prog-id=25 op=UNLOAD Jul 2 00:45:40.964000 audit: BPF prog-id=35 op=LOAD Jul 2 00:45:40.964000 audit: BPF prog-id=21 op=UNLOAD Jul 2 00:45:40.964000 audit: BPF prog-id=36 op=LOAD Jul 2 00:45:40.964000 audit: BPF prog-id=37 op=LOAD Jul 2 00:45:40.964000 audit: BPF prog-id=22 op=UNLOAD Jul 2 00:45:40.964000 audit: BPF prog-id=23 op=UNLOAD Jul 2 00:45:40.970000 audit: BPF prog-id=38 op=LOAD Jul 2 00:45:40.970000 audit: BPF prog-id=26 op=UNLOAD Jul 2 00:45:40.994525 systemd[1]: Finished ldconfig.service. Jul 2 00:45:40.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:40.996624 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 00:45:40.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.000755 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Jul 2 00:45:41.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.009375 systemd[1]: Starting audit-rules.service... Jul 2 00:45:41.013785 systemd[1]: Starting clean-ca-certificates.service... Jul 2 00:45:41.018429 systemd[1]: Starting systemd-journal-catalog-update.service... Jul 2 00:45:41.020000 audit: BPF prog-id=39 op=LOAD Jul 2 00:45:41.026000 audit: BPF prog-id=40 op=LOAD Jul 2 00:45:41.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.024898 systemd[1]: Starting systemd-resolved.service... Jul 2 00:45:41.029506 systemd-networkd[1462]: eth0: Gained IPv6LL Jul 2 00:45:41.056000 audit[1694]: SYSTEM_BOOT pid=1694 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.032491 systemd[1]: Starting systemd-timesyncd.service... Jul 2 00:45:41.036460 systemd[1]: Starting systemd-update-utmp.service... Jul 2 00:45:41.039142 systemd[1]: Finished systemd-networkd-wait-online.service. Jul 2 00:45:41.059709 systemd[1]: Finished clean-ca-certificates.service. Jul 2 00:45:41.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.068989 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.072984 systemd[1]: Starting modprobe@dm_mod.service... 
Jul 2 00:45:41.079357 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 00:45:41.083459 systemd[1]: Starting modprobe@loop.service... Jul 2 00:45:41.085612 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.086088 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:45:41.086586 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:45:41.096260 systemd[1]: Finished systemd-update-utmp.service. Jul 2 00:45:41.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.098903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:45:41.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.099228 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:45:41.102003 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.102333 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
Jul 2 00:45:41.102574 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:45:41.102793 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:45:41.110288 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.113801 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 00:45:41.117909 systemd[1]: Starting modprobe@drm.service... Jul 2 00:45:41.119530 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.119854 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:45:41.120138 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 00:45:41.124825 systemd[1]: Finished ensure-sysext.service. Jul 2 00:45:41.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.132132 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 00:45:41.132471 systemd[1]: Finished modprobe@loop.service. Jul 2 00:45:41.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:45:41.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.134912 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 00:45:41.135232 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 00:45:41.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.137289 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 00:45:41.143872 systemd[1]: Finished systemd-journal-catalog-update.service. Jul 2 00:45:41.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.146515 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 00:45:41.146839 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 00:45:41.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 00:45:41.148659 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.151070 systemd[1]: Starting systemd-update-done.service... Jul 2 00:45:41.156063 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 00:45:41.156000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.156419 systemd[1]: Finished modprobe@drm.service. Jul 2 00:45:41.168497 systemd[1]: Finished systemd-update-done.service. Jul 2 00:45:41.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.244232 systemd[1]: Started systemd-timesyncd.service. Jul 2 00:45:41.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 00:45:41.246582 systemd[1]: Reached target time-set.target. 
Jul 2 00:45:41.247000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 00:45:41.247000 audit[1715]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc515deb0 a2=420 a3=0 items=0 ppid=1689 pid=1715 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 00:45:41.247000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 00:45:41.249658 augenrules[1715]: No rules Jul 2 00:45:41.251061 systemd[1]: Finished audit-rules.service. Jul 2 00:45:41.264403 systemd-resolved[1692]: Positive Trust Anchors: Jul 2 00:45:41.264889 systemd-resolved[1692]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 00:45:41.265047 systemd-resolved[1692]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 00:45:41.267120 systemd-timesyncd[1693]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Jul 2 00:45:41.267487 systemd-timesyncd[1693]: Initial clock synchronization to Tue 2024-07-02 00:45:41.429441 UTC. Jul 2 00:45:41.297296 systemd-resolved[1692]: Defaulting to hostname 'linux'. Jul 2 00:45:41.300419 systemd[1]: Started systemd-resolved.service. Jul 2 00:45:41.302152 systemd[1]: Reached target network.target. Jul 2 00:45:41.303660 systemd[1]: Reached target network-online.target. 
Jul 2 00:45:41.305242 systemd[1]: Reached target nss-lookup.target. Jul 2 00:45:41.306746 systemd[1]: Reached target sysinit.target. Jul 2 00:45:41.308310 systemd[1]: Started motdgen.path. Jul 2 00:45:41.309596 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Jul 2 00:45:41.311814 systemd[1]: Started logrotate.timer. Jul 2 00:45:41.313349 systemd[1]: Started mdadm.timer. Jul 2 00:45:41.314571 systemd[1]: Started systemd-tmpfiles-clean.timer. Jul 2 00:45:41.316118 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 00:45:41.316154 systemd[1]: Reached target paths.target. Jul 2 00:45:41.317494 systemd[1]: Reached target timers.target. Jul 2 00:45:41.332298 systemd[1]: Listening on dbus.socket. Jul 2 00:45:41.335599 systemd[1]: Starting docker.socket... Jul 2 00:45:41.341863 systemd[1]: Listening on sshd.socket. Jul 2 00:45:41.343535 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:45:41.344413 systemd[1]: Listening on docker.socket. Jul 2 00:45:41.345981 systemd[1]: Reached target sockets.target. Jul 2 00:45:41.347482 systemd[1]: Reached target basic.target. Jul 2 00:45:41.348969 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.349028 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Jul 2 00:45:41.351227 systemd[1]: Started amazon-ssm-agent.service. Jul 2 00:45:41.355421 systemd[1]: Starting containerd.service... Jul 2 00:45:41.358819 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Jul 2 00:45:41.363800 systemd[1]: Starting dbus.service... Jul 2 00:45:41.369586 systemd[1]: Starting enable-oem-cloudinit.service... 
Jul 2 00:45:41.379250 systemd[1]: Starting extend-filesystems.service... Jul 2 00:45:41.380774 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Jul 2 00:45:41.383492 systemd[1]: Starting kubelet.service... Jul 2 00:45:41.387533 systemd[1]: Starting motdgen.service... Jul 2 00:45:41.393098 systemd[1]: Started nvidia.service. Jul 2 00:45:41.405945 systemd[1]: Starting prepare-helm.service... Jul 2 00:45:41.418629 systemd[1]: Starting ssh-key-proc-cmdline.service... Jul 2 00:45:41.422852 systemd[1]: Starting sshd-keygen.service... Jul 2 00:45:41.431047 systemd[1]: Starting systemd-logind.service... Jul 2 00:45:41.432753 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 00:45:41.432890 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 00:45:41.434410 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 00:45:41.438504 systemd[1]: Starting update-engine.service... Jul 2 00:45:41.443398 systemd[1]: Starting update-ssh-keys-after-ignition.service... Jul 2 00:45:41.485203 jq[1727]: false Jul 2 00:45:41.502965 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 00:45:41.503363 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Jul 2 00:45:41.519560 extend-filesystems[1728]: Found loop1 Jul 2 00:45:41.521374 jq[1741]: true Jul 2 00:45:41.530575 extend-filesystems[1728]: Found nvme0n1 Jul 2 00:45:41.543855 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 00:45:41.557086 tar[1746]: linux-arm64/helm Jul 2 00:45:41.544212 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Jul 2 00:45:41.576408 extend-filesystems[1728]: Found nvme0n1p1 Jul 2 00:45:41.581974 extend-filesystems[1728]: Found nvme0n1p2 Jul 2 00:45:41.585996 dbus-daemon[1726]: [system] SELinux support is enabled Jul 2 00:45:41.586805 extend-filesystems[1728]: Found nvme0n1p3 Jul 2 00:45:41.611207 extend-filesystems[1728]: Found usr Jul 2 00:45:41.611207 extend-filesystems[1728]: Found nvme0n1p4 Jul 2 00:45:41.611207 extend-filesystems[1728]: Found nvme0n1p6 Jul 2 00:45:41.611207 extend-filesystems[1728]: Found nvme0n1p7 Jul 2 00:45:41.611207 extend-filesystems[1728]: Found nvme0n1p9 Jul 2 00:45:41.611207 extend-filesystems[1728]: Checking size of /dev/nvme0n1p9 Jul 2 00:45:41.607558 systemd[1]: Started dbus.service. Jul 2 00:45:41.613357 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 00:45:41.613399 systemd[1]: Reached target system-config.target. Jul 2 00:45:41.621335 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 00:45:41.621379 systemd[1]: Reached target user-config.target. Jul 2 00:45:41.643710 dbus-daemon[1726]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1462 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jul 2 00:45:41.644869 jq[1757]: true Jul 2 00:45:41.652440 systemd[1]: Starting systemd-hostnamed.service... Jul 2 00:45:41.680115 extend-filesystems[1728]: Resized partition /dev/nvme0n1p9 Jul 2 00:45:41.721353 extend-filesystems[1777]: resize2fs 1.46.5 (30-Dec-2021) Jul 2 00:45:41.769410 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 00:45:41.769817 systemd[1]: Finished motdgen.service. 
Jul 2 00:45:41.772421 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jul 2 00:45:41.853078 bash[1788]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:45:41.854674 systemd[1]: Finished update-ssh-keys-after-ignition.service. Jul 2 00:45:41.865277 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jul 2 00:45:41.900622 extend-filesystems[1777]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jul 2 00:45:41.900622 extend-filesystems[1777]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 2 00:45:41.900622 extend-filesystems[1777]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jul 2 00:45:41.920360 extend-filesystems[1728]: Resized filesystem in /dev/nvme0n1p9 Jul 2 00:45:41.922337 update_engine[1740]: I0702 00:45:41.912744 1740 main.cc:92] Flatcar Update Engine starting Jul 2 00:45:41.906998 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 00:45:41.907336 systemd[1]: Finished extend-filesystems.service. Jul 2 00:45:41.928923 update_engine[1740]: I0702 00:45:41.928861 1740 update_check_scheduler.cc:74] Next update check in 9m5s Jul 2 00:45:41.929560 systemd[1]: Started update-engine.service. Jul 2 00:45:41.932764 env[1747]: time="2024-07-02T00:45:41.932664638Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Jul 2 00:45:41.934153 systemd[1]: Started locksmithd.service. Jul 2 00:45:42.022698 amazon-ssm-agent[1723]: 2024/07/02 00:45:42 Failed to load instance info from vault. RegistrationKey does not exist. Jul 2 00:45:42.038275 amazon-ssm-agent[1723]: Initializing new seelog logger Jul 2 00:45:42.038503 amazon-ssm-agent[1723]: New Seelog Logger Creation Complete Jul 2 00:45:42.038595 amazon-ssm-agent[1723]: 2024/07/02 00:45:42 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jul 2 00:45:42.038595 amazon-ssm-agent[1723]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. 
Jul 2 00:45:42.038938 amazon-ssm-agent[1723]: 2024/07/02 00:45:42 processing appconfig overrides Jul 2 00:45:42.159006 systemd[1]: nvidia.service: Deactivated successfully. Jul 2 00:45:42.162011 env[1747]: time="2024-07-02T00:45:42.161859588Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 00:45:42.170421 systemd-logind[1738]: Watching system buttons on /dev/input/event0 (Power Button) Jul 2 00:45:42.170475 systemd-logind[1738]: Watching system buttons on /dev/input/event1 (Sleep Button) Jul 2 00:45:42.172308 systemd-logind[1738]: New seat seat0. Jul 2 00:45:42.181130 systemd[1]: Started systemd-logind.service. Jul 2 00:45:42.190017 env[1747]: time="2024-07-02T00:45:42.189926618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:45:42.193031 env[1747]: time="2024-07-02T00:45:42.192921587Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.193042714Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.193587671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.193656457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.193690678Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.193741331Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194293 env[1747]: time="2024-07-02T00:45:42.194032732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:45:42.194872 env[1747]: time="2024-07-02T00:45:42.194779111Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 00:45:42.195309 env[1747]: time="2024-07-02T00:45:42.195214817Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 00:45:42.195309 env[1747]: time="2024-07-02T00:45:42.195282684Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 00:45:42.195570 env[1747]: time="2024-07-02T00:45:42.195523091Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 00:45:42.195686 env[1747]: time="2024-07-02T00:45:42.195563336Z" level=info msg="metadata content store policy set" policy=shared Jul 2 00:45:42.219600 env[1747]: time="2024-07-02T00:45:42.219498062Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 00:45:42.219793 env[1747]: time="2024-07-02T00:45:42.219606455Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Jul 2 00:45:42.219793 env[1747]: time="2024-07-02T00:45:42.219665421Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 00:45:42.219908 env[1747]: time="2024-07-02T00:45:42.219761314Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.219908 env[1747]: time="2024-07-02T00:45:42.219835769Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.219908 env[1747]: time="2024-07-02T00:45:42.219871165Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220090 env[1747]: time="2024-07-02T00:45:42.219931086Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220690 env[1747]: time="2024-07-02T00:45:42.220632237Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220797 env[1747]: time="2024-07-02T00:45:42.220712336Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220797 env[1747]: time="2024-07-02T00:45:42.220751822Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220905 env[1747]: time="2024-07-02T00:45:42.220811792Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.220905 env[1747]: time="2024-07-02T00:45:42.220843870Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 00:45:42.221246 env[1747]: time="2024-07-02T00:45:42.221198254Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Jul 2 00:45:42.221524 env[1747]: time="2024-07-02T00:45:42.221471228Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 00:45:42.222005 env[1747]: time="2024-07-02T00:45:42.221958419Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 00:45:42.222109 env[1747]: time="2024-07-02T00:45:42.222069384Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222197 env[1747]: time="2024-07-02T00:45:42.222119021Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 00:45:42.222276 env[1747]: time="2024-07-02T00:45:42.222248400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222363 env[1747]: time="2024-07-02T00:45:42.222283075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222363 env[1747]: time="2024-07-02T00:45:42.222313696Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222363 env[1747]: time="2024-07-02T00:45:42.222345922Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222514 env[1747]: time="2024-07-02T00:45:42.222375870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222514 env[1747]: time="2024-07-02T00:45:42.222440615Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222514 env[1747]: time="2024-07-02T00:45:42.222474493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Jul 2 00:45:42.222514 env[1747]: time="2024-07-02T00:45:42.222503328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222731 env[1747]: time="2024-07-02T00:45:42.222537096Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 00:45:42.222857 env[1747]: time="2024-07-02T00:45:42.222810352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222857 env[1747]: time="2024-07-02T00:45:42.222856866Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222995 env[1747]: time="2024-07-02T00:45:42.222889532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 00:45:42.222995 env[1747]: time="2024-07-02T00:45:42.222918819Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 00:45:42.222995 env[1747]: time="2024-07-02T00:45:42.222952808Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 00:45:42.222995 env[1747]: time="2024-07-02T00:45:42.222984691Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 00:45:42.223360 env[1747]: time="2024-07-02T00:45:42.223020369Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Jul 2 00:45:42.223360 env[1747]: time="2024-07-02T00:45:42.223102880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 00:45:42.224180 env[1747]: time="2024-07-02T00:45:42.224061064Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 00:45:42.225304 env[1747]: time="2024-07-02T00:45:42.224200655Z" level=info msg="Connect containerd service" Jul 2 00:45:42.229943 env[1747]: time="2024-07-02T00:45:42.229871373Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 00:45:42.231044 env[1747]: time="2024-07-02T00:45:42.230984147Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:45:42.231511 env[1747]: time="2024-07-02T00:45:42.231463441Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 00:45:42.231603 env[1747]: time="2024-07-02T00:45:42.231576573Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 00:45:42.231783 systemd[1]: Started containerd.service. 
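The containerd error above ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected on a node where no networking add-on has run yet: the CRI plugin's CniConfig (NetworkPluginConfDir:/etc/cni/net.d, per the config dump above) points at a directory that is still empty, and the message clears once a CNI plugin writes a conflist there. As a hypothetical illustration only — the file name, network name, and subnet below are invented, and on this node the real file would normally be written by the cluster's networking add-on rather than by hand — a minimal bridge conflist might look like:

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.88.0.0/16"
      }
    }
  ]
}
```

Until such a file exists, containerd starts anyway (as the log shows) but reports the CRI plugin's network as not ready.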
Jul 2 00:45:42.241217 env[1747]: time="2024-07-02T00:45:42.241132309Z" level=info msg="containerd successfully booted in 0.316817s" Jul 2 00:45:42.251224 env[1747]: time="2024-07-02T00:45:42.250970642Z" level=info msg="Start subscribing containerd event" Jul 2 00:45:42.251224 env[1747]: time="2024-07-02T00:45:42.251079599Z" level=info msg="Start recovering state" Jul 2 00:45:42.251938 env[1747]: time="2024-07-02T00:45:42.251482823Z" level=info msg="Start event monitor" Jul 2 00:45:42.251938 env[1747]: time="2024-07-02T00:45:42.251535385Z" level=info msg="Start snapshots syncer" Jul 2 00:45:42.251938 env[1747]: time="2024-07-02T00:45:42.251563264Z" level=info msg="Start cni network conf syncer for default" Jul 2 00:45:42.251938 env[1747]: time="2024-07-02T00:45:42.251583332Z" level=info msg="Start streaming server" Jul 2 00:45:42.295245 dbus-daemon[1726]: [system] Successfully activated service 'org.freedesktop.hostname1' Jul 2 00:45:42.295508 systemd[1]: Started systemd-hostnamed.service. Jul 2 00:45:42.300333 dbus-daemon[1726]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1766 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jul 2 00:45:42.305291 systemd[1]: Starting polkit.service... Jul 2 00:45:42.355125 polkitd[1853]: Started polkitd version 121 Jul 2 00:45:42.427867 polkitd[1853]: Loading rules from directory /etc/polkit-1/rules.d Jul 2 00:45:42.428698 polkitd[1853]: Loading rules from directory /usr/share/polkit-1/rules.d Jul 2 00:45:42.436773 polkitd[1853]: Finished loading, compiling and executing 2 rules Jul 2 00:45:42.444803 dbus-daemon[1726]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jul 2 00:45:42.445059 systemd[1]: Started polkit.service. 
Jul 2 00:45:42.450591 polkitd[1853]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jul 2 00:45:42.501855 systemd-resolved[1692]: System hostname changed to 'ip-172-31-20-46'. Jul 2 00:45:42.501866 systemd-hostnamed[1766]: Hostname set to (transient) Jul 2 00:45:42.627740 coreos-metadata[1725]: Jul 02 00:45:42.627 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jul 2 00:45:42.634944 coreos-metadata[1725]: Jul 02 00:45:42.634 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Jul 2 00:45:42.636831 coreos-metadata[1725]: Jul 02 00:45:42.636 INFO Fetch successful Jul 2 00:45:42.636831 coreos-metadata[1725]: Jul 02 00:45:42.636 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Jul 2 00:45:42.639278 coreos-metadata[1725]: Jul 02 00:45:42.639 INFO Fetch successful Jul 2 00:45:42.643596 unknown[1725]: wrote ssh authorized keys file for user: core Jul 2 00:45:42.703508 update-ssh-keys[1898]: Updated "/home/core/.ssh/authorized_keys" Jul 2 00:45:42.704597 systemd[1]: Finished coreos-metadata-sshkeys@core.service. 
Jul 2 00:45:42.939218 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Create new startup processor Jul 2 00:45:42.939218 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [LongRunningPluginsManager] registered plugins: {} Jul 2 00:45:42.939469 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing bookkeeping folders Jul 2 00:45:42.939617 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO removing the completed state files Jul 2 00:45:42.939735 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing bookkeeping folders for long running plugins Jul 2 00:45:42.939949 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Jul 2 00:45:42.940119 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing healthcheck folders for long running plugins Jul 2 00:45:42.940294 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing locations for inventory plugin Jul 2 00:45:42.940412 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing default location for custom inventory Jul 2 00:45:42.940550 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing default location for file inventory Jul 2 00:45:42.940675 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Initializing default location for role inventory Jul 2 00:45:42.940789 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Init the cloudwatchlogs publisher Jul 2 00:45:42.940915 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:runDocument Jul 2 00:45:42.941044 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:configureDocker Jul 2 00:45:42.941243 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:runDockerAction Jul 2 00:45:42.941410 amazon-ssm-agent[1723]: 2024-07-02 
00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:refreshAssociation Jul 2 00:45:42.941533 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:configurePackage Jul 2 00:45:42.941682 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:softwareInventory Jul 2 00:45:42.941796 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:runPowerShellScript Jul 2 00:45:42.941920 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:updateSsmAgent Jul 2 00:45:42.942034 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform independent plugin aws:downloadContent Jul 2 00:45:42.942150 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Successfully loaded platform dependent plugin aws:runShellScript Jul 2 00:45:42.942296 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Jul 2 00:45:42.942417 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO OS: linux, Arch: arm64 Jul 2 00:45:42.954999 amazon-ssm-agent[1723]: datastore file /var/lib/amazon/ssm/i-07e2ce81fe6eb749d/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Jul 2 00:45:43.038602 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessageGatewayService] Starting session document processing engine... Jul 2 00:45:43.133528 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessageGatewayService] [EngineProcessor] Starting Jul 2 00:45:43.227937 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. 
Jul 2 00:45:43.322421 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-07e2ce81fe6eb749d, requestId: 2ea856ea-f72e-45d1-97c0-920e2270e7de Jul 2 00:45:43.417154 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] Starting document processing engine... Jul 2 00:45:43.512157 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [EngineProcessor] Starting Jul 2 00:45:43.607225 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Jul 2 00:45:43.638875 locksmithd[1807]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 00:45:43.702545 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] Starting message polling Jul 2 00:45:43.704252 systemd[1]: Started kubelet.service. Jul 2 00:45:43.798178 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] Starting send replies to MDS Jul 2 00:45:43.858436 tar[1746]: linux-arm64/LICENSE Jul 2 00:45:43.858977 tar[1746]: linux-arm64/README.md Jul 2 00:45:43.874077 systemd[1]: Finished prepare-helm.service. 
Jul 2 00:45:43.893812 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [instanceID=i-07e2ce81fe6eb749d] Starting association polling Jul 2 00:45:43.989716 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Jul 2 00:45:44.085899 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [Association] Launching response handler Jul 2 00:45:44.182131 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Jul 2 00:45:44.278627 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Jul 2 00:45:44.375364 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Jul 2 00:45:44.472277 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [MessageGatewayService] listening reply. Jul 2 00:45:44.524526 kubelet[1930]: E0702 00:45:44.524449 1930 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:45:44.528485 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:45:44.528804 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:45:44.529287 systemd[1]: kubelet.service: Consumed 1.472s CPU time. Jul 2 00:45:44.569382 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [HealthCheck] HealthCheck reporting agent health. Jul 2 00:45:44.667272 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [OfflineService] Starting document processing engine... 
Jul 2 00:45:44.764699 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [OfflineService] [EngineProcessor] Starting Jul 2 00:45:44.862931 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [OfflineService] [EngineProcessor] Initial processing Jul 2 00:45:44.960926 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [OfflineService] Starting message polling Jul 2 00:45:45.059460 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [OfflineService] Starting send replies to MDS Jul 2 00:45:45.157851 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [LongRunningPluginsManager] starting long running plugin manager Jul 2 00:45:45.256371 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Jul 2 00:45:45.355202 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Jul 2 00:45:45.454203 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [StartupProcessor] Executing startup processor tasks Jul 2 00:45:45.553548 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Jul 2 00:45:45.652713 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Jul 2 00:45:45.752517 amazon-ssm-agent[1723]: 2024-07-02 00:45:42 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.5 Jul 2 00:45:45.852217 amazon-ssm-agent[1723]: 2024-07-02 00:45:43 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07e2ce81fe6eb749d?role=subscribe&stream=input Jul 2 00:45:45.951995 amazon-ssm-agent[1723]: 2024-07-02 00:45:43 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-07e2ce81fe6eb749d?role=subscribe&stream=input 
Jul 2 00:45:46.052111 amazon-ssm-agent[1723]: 2024-07-02 00:45:43 INFO [MessageGatewayService] Starting receiving message from control channel Jul 2 00:45:46.152428 amazon-ssm-agent[1723]: 2024-07-02 00:45:43 INFO [MessageGatewayService] [EngineProcessor] Initial processing Jul 2 00:45:46.642968 amazon-ssm-agent[1723]: 2024-07-02 00:45:46 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Jul 2 00:45:47.322298 sshd_keygen[1748]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 00:45:47.358892 systemd[1]: Finished sshd-keygen.service. Jul 2 00:45:47.363547 systemd[1]: Starting issuegen.service... Jul 2 00:45:47.374634 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 00:45:47.375005 systemd[1]: Finished issuegen.service. Jul 2 00:45:47.379655 systemd[1]: Starting systemd-user-sessions.service... Jul 2 00:45:47.394447 systemd[1]: Finished systemd-user-sessions.service. Jul 2 00:45:47.399566 systemd[1]: Started getty@tty1.service. Jul 2 00:45:47.404009 systemd[1]: Started serial-getty@ttyS0.service. Jul 2 00:45:47.406156 systemd[1]: Reached target getty.target. Jul 2 00:45:47.408118 systemd[1]: Reached target multi-user.target. Jul 2 00:45:47.412507 systemd[1]: Starting systemd-update-utmp-runlevel.service... Jul 2 00:45:47.428758 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 00:45:47.429146 systemd[1]: Finished systemd-update-utmp-runlevel.service. Jul 2 00:45:47.431277 systemd[1]: Startup finished in 1.133s (kernel) + 8.714s (initrd) + 14.127s (userspace) = 23.975s. Jul 2 00:45:50.468413 systemd[1]: Created slice system-sshd.slice. Jul 2 00:45:50.470793 systemd[1]: Started sshd@0-172.31.20.46:22-139.178.89.65:56104.service. 
Jul 2 00:45:50.690432 sshd[1952]: Accepted publickey for core from 139.178.89.65 port 56104 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:50.695151 sshd[1952]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:50.713368 systemd[1]: Created slice user-500.slice. Jul 2 00:45:50.715727 systemd[1]: Starting user-runtime-dir@500.service... Jul 2 00:45:50.722299 systemd-logind[1738]: New session 1 of user core. Jul 2 00:45:50.734836 systemd[1]: Finished user-runtime-dir@500.service. Jul 2 00:45:50.737891 systemd[1]: Starting user@500.service... Jul 2 00:45:50.745385 (systemd)[1955]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:50.922895 systemd[1955]: Queued start job for default target default.target. Jul 2 00:45:50.923945 systemd[1955]: Reached target paths.target. Jul 2 00:45:50.923998 systemd[1955]: Reached target sockets.target. Jul 2 00:45:50.924030 systemd[1955]: Reached target timers.target. Jul 2 00:45:50.924059 systemd[1955]: Reached target basic.target. Jul 2 00:45:50.924203 systemd[1955]: Reached target default.target. Jul 2 00:45:50.924273 systemd[1955]: Startup finished in 166ms. Jul 2 00:45:50.924645 systemd[1]: Started user@500.service. Jul 2 00:45:50.926678 systemd[1]: Started session-1.scope. Jul 2 00:45:51.077754 systemd[1]: Started sshd@1-172.31.20.46:22-139.178.89.65:56108.service. Jul 2 00:45:51.263320 sshd[1964]: Accepted publickey for core from 139.178.89.65 port 56108 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:51.265526 sshd[1964]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:51.274275 systemd-logind[1738]: New session 2 of user core. Jul 2 00:45:51.274332 systemd[1]: Started session-2.scope. Jul 2 00:45:51.407127 sshd[1964]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:51.413075 systemd-logind[1738]: Session 2 logged out. 
Waiting for processes to exit. Jul 2 00:45:51.415326 systemd[1]: sshd@1-172.31.20.46:22-139.178.89.65:56108.service: Deactivated successfully. Jul 2 00:45:51.416604 systemd[1]: session-2.scope: Deactivated successfully. Jul 2 00:45:51.417621 systemd-logind[1738]: Removed session 2. Jul 2 00:45:51.436925 systemd[1]: Started sshd@2-172.31.20.46:22-139.178.89.65:56110.service. Jul 2 00:45:51.615248 sshd[1970]: Accepted publickey for core from 139.178.89.65 port 56110 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:51.617545 sshd[1970]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:51.626352 systemd[1]: Started session-3.scope. Jul 2 00:45:51.627281 systemd-logind[1738]: New session 3 of user core. Jul 2 00:45:51.752984 sshd[1970]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:51.757628 systemd[1]: session-3.scope: Deactivated successfully. Jul 2 00:45:51.759616 systemd[1]: sshd@2-172.31.20.46:22-139.178.89.65:56110.service: Deactivated successfully. Jul 2 00:45:51.760364 systemd-logind[1738]: Session 3 logged out. Waiting for processes to exit. Jul 2 00:45:51.762800 systemd-logind[1738]: Removed session 3. Jul 2 00:45:51.780632 systemd[1]: Started sshd@3-172.31.20.46:22-139.178.89.65:56122.service. Jul 2 00:45:51.956078 sshd[1976]: Accepted publickey for core from 139.178.89.65 port 56122 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:51.959039 sshd[1976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:51.967330 systemd[1]: Started session-4.scope. Jul 2 00:45:51.969279 systemd-logind[1738]: New session 4 of user core. Jul 2 00:45:52.099477 sshd[1976]: pam_unix(sshd:session): session closed for user core Jul 2 00:45:52.104640 systemd-logind[1738]: Session 4 logged out. Waiting for processes to exit. Jul 2 00:45:52.105015 systemd[1]: sshd@3-172.31.20.46:22-139.178.89.65:56122.service: Deactivated successfully. 
Jul 2 00:45:52.106311 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 00:45:52.107726 systemd-logind[1738]: Removed session 4. Jul 2 00:45:52.126939 systemd[1]: Started sshd@4-172.31.20.46:22-139.178.89.65:56126.service. Jul 2 00:45:52.298926 sshd[1982]: Accepted publickey for core from 139.178.89.65 port 56126 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:45:52.301879 sshd[1982]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:45:52.310391 systemd[1]: Started session-5.scope. Jul 2 00:45:52.311585 systemd-logind[1738]: New session 5 of user core. Jul 2 00:45:52.436525 sudo[1985]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 00:45:52.437073 sudo[1985]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 00:45:52.483160 systemd[1]: Starting docker.service... Jul 2 00:45:52.565253 env[1995]: time="2024-07-02T00:45:52.565146559Z" level=info msg="Starting up" Jul 2 00:45:52.567519 env[1995]: time="2024-07-02T00:45:52.567470713Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:45:52.567688 env[1995]: time="2024-07-02T00:45:52.567659854Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:45:52.567868 env[1995]: time="2024-07-02T00:45:52.567836389Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:45:52.567983 env[1995]: time="2024-07-02T00:45:52.567956161Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:45:52.571044 env[1995]: time="2024-07-02T00:45:52.571000314Z" level=info msg="parsed scheme: \"unix\"" module=grpc Jul 2 00:45:52.571256 env[1995]: time="2024-07-02T00:45:52.571226782Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Jul 2 00:45:52.571525 env[1995]: time="2024-07-02T00:45:52.571492072Z" level=info 
msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Jul 2 00:45:52.571664 env[1995]: time="2024-07-02T00:45:52.571636745Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Jul 2 00:45:52.622165 env[1995]: time="2024-07-02T00:45:52.622117932Z" level=info msg="Loading containers: start." Jul 2 00:45:52.799210 kernel: Initializing XFRM netlink socket Jul 2 00:45:52.854484 env[1995]: time="2024-07-02T00:45:52.854418027Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Jul 2 00:45:52.858838 (udev-worker)[2006]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:45:52.958602 systemd-networkd[1462]: docker0: Link UP Jul 2 00:45:52.978183 env[1995]: time="2024-07-02T00:45:52.978108479Z" level=info msg="Loading containers: done." Jul 2 00:45:53.000587 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck897698601-merged.mount: Deactivated successfully. Jul 2 00:45:53.014016 env[1995]: time="2024-07-02T00:45:53.013948261Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 00:45:53.014380 env[1995]: time="2024-07-02T00:45:53.014341772Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Jul 2 00:45:53.014593 env[1995]: time="2024-07-02T00:45:53.014551669Z" level=info msg="Daemon has completed initialization" Jul 2 00:45:53.042115 systemd[1]: Started docker.service. 
Jul 2 00:45:53.051158 env[1995]: time="2024-07-02T00:45:53.051073556Z" level=info msg="API listen on /run/docker.sock" Jul 2 00:45:53.882319 env[1747]: time="2024-07-02T00:45:53.882224571Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jul 2 00:45:54.485512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2185391079.mount: Deactivated successfully. Jul 2 00:45:54.615527 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 00:45:54.615902 systemd[1]: Stopped kubelet.service. Jul 2 00:45:54.615975 systemd[1]: kubelet.service: Consumed 1.472s CPU time. Jul 2 00:45:54.618622 systemd[1]: Starting kubelet.service... Jul 2 00:45:55.141424 systemd[1]: Started kubelet.service. Jul 2 00:45:55.356697 kubelet[2129]: E0702 00:45:55.356638 2129 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:45:55.364465 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:45:55.364778 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jul 2 00:45:56.840733 env[1747]: time="2024-07-02T00:45:56.840654354Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:56.844200 env[1747]: time="2024-07-02T00:45:56.844119229Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:56.848021 env[1747]: time="2024-07-02T00:45:56.847957668Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:56.851596 env[1747]: time="2024-07-02T00:45:56.851533169Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:56.853344 env[1747]: time="2024-07-02T00:45:56.853281075Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jul 2 00:45:56.871878 env[1747]: time="2024-07-02T00:45:56.871829399Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jul 2 00:45:59.268272 env[1747]: time="2024-07-02T00:45:59.268213896Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:59.280996 env[1747]: time="2024-07-02T00:45:59.280937861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Jul 2 00:45:59.286892 env[1747]: time="2024-07-02T00:45:59.286822396Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:59.295590 env[1747]: time="2024-07-02T00:45:59.295521472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:45:59.297357 env[1747]: time="2024-07-02T00:45:59.297247771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jul 2 00:45:59.316122 env[1747]: time="2024-07-02T00:45:59.316065906Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jul 2 00:46:00.882096 env[1747]: time="2024-07-02T00:46:00.882037482Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:00.886646 env[1747]: time="2024-07-02T00:46:00.886595744Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:00.889789 env[1747]: time="2024-07-02T00:46:00.889742091Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:00.893021 env[1747]: time="2024-07-02T00:46:00.892967879Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:00.894760 env[1747]: time="2024-07-02T00:46:00.894704330Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jul 2 00:46:00.914699 env[1747]: time="2024-07-02T00:46:00.914651601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jul 2 00:46:02.478332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3050368216.mount: Deactivated successfully. Jul 2 00:46:03.335418 env[1747]: time="2024-07-02T00:46:03.335337268Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:03.338940 env[1747]: time="2024-07-02T00:46:03.338879580Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:03.342320 env[1747]: time="2024-07-02T00:46:03.342265556Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:03.345306 env[1747]: time="2024-07-02T00:46:03.345247608Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:03.346461 env[1747]: time="2024-07-02T00:46:03.346411538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference 
\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jul 2 00:46:03.363438 env[1747]: time="2024-07-02T00:46:03.363388587Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 00:46:03.949558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2556278321.mount: Deactivated successfully. Jul 2 00:46:05.399447 env[1747]: time="2024-07-02T00:46:05.399384568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:05.402948 env[1747]: time="2024-07-02T00:46:05.402896565Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:05.406552 env[1747]: time="2024-07-02T00:46:05.406487524Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:05.409957 env[1747]: time="2024-07-02T00:46:05.409896211Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:05.411852 env[1747]: time="2024-07-02T00:46:05.411777925Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jul 2 00:46:05.428641 env[1747]: time="2024-07-02T00:46:05.428578421Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 00:46:05.615535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 00:46:05.615860 systemd[1]: Stopped kubelet.service. 
Jul 2 00:46:05.618525 systemd[1]: Starting kubelet.service... Jul 2 00:46:06.095238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839747447.mount: Deactivated successfully. Jul 2 00:46:06.121783 systemd[1]: Started kubelet.service. Jul 2 00:46:06.125570 env[1747]: time="2024-07-02T00:46:06.125494185Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:06.130323 env[1747]: time="2024-07-02T00:46:06.130252359Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:06.135218 env[1747]: time="2024-07-02T00:46:06.134796457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:06.138302 env[1747]: time="2024-07-02T00:46:06.138212417Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:06.139488 env[1747]: time="2024-07-02T00:46:06.139411617Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jul 2 00:46:06.156596 env[1747]: time="2024-07-02T00:46:06.156541297Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jul 2 00:46:06.229655 kubelet[2165]: E0702 00:46:06.229570 2165 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open 
/var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:46:06.233958 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:46:06.234288 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:46:06.728509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount822354260.mount: Deactivated successfully. Jul 2 00:46:10.500407 env[1747]: time="2024-07-02T00:46:10.500346762Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:10.504937 env[1747]: time="2024-07-02T00:46:10.504879706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:10.510231 env[1747]: time="2024-07-02T00:46:10.510153456Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:10.513843 env[1747]: time="2024-07-02T00:46:10.513782317Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:10.515806 env[1747]: time="2024-07-02T00:46:10.515730581Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jul 2 00:46:12.517140 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 00:46:16.365536 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 00:46:16.365911 systemd[1]: Stopped kubelet.service. Jul 2 00:46:16.368594 systemd[1]: Starting kubelet.service... 
Jul 2 00:46:16.671635 amazon-ssm-agent[1723]: 2024-07-02 00:46:16 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Jul 2 00:46:16.753400 systemd[1]: Started kubelet.service. Jul 2 00:46:16.843824 kubelet[2242]: E0702 00:46:16.843753 2242 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 00:46:16.847281 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 00:46:16.847618 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 00:46:18.386326 systemd[1]: Stopped kubelet.service. Jul 2 00:46:18.390944 systemd[1]: Starting kubelet.service... Jul 2 00:46:18.433709 systemd[1]: Reloading. Jul 2 00:46:18.614189 /usr/lib/systemd/system-generators/torcx-generator[2273]: time="2024-07-02T00:46:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:46:18.629394 /usr/lib/systemd/system-generators/torcx-generator[2273]: time="2024-07-02T00:46:18Z" level=info msg="torcx already run" Jul 2 00:46:18.811390 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:46:18.811636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Jul 2 00:46:18.851017 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:46:19.077476 systemd[1]: Started kubelet.service. Jul 2 00:46:19.083873 systemd[1]: Stopping kubelet.service... Jul 2 00:46:19.085949 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:46:19.086447 systemd[1]: Stopped kubelet.service. Jul 2 00:46:19.090511 systemd[1]: Starting kubelet.service... Jul 2 00:46:19.407088 systemd[1]: Started kubelet.service. Jul 2 00:46:19.513862 kubelet[2337]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:46:19.513862 kubelet[2337]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 00:46:19.513862 kubelet[2337]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 00:46:19.515741 kubelet[2337]: I0702 00:46:19.515645 2337 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:46:20.084383 kubelet[2337]: I0702 00:46:20.084340 2337 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:46:20.084597 kubelet[2337]: I0702 00:46:20.084575 2337 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:46:20.085016 kubelet[2337]: I0702 00:46:20.084993 2337 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:46:20.118512 kubelet[2337]: E0702 00:46:20.118460 2337 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.20.46:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.120341 kubelet[2337]: I0702 00:46:20.120298 2337 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:46:20.136936 kubelet[2337]: I0702 00:46:20.136876 2337 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:46:20.137542 kubelet[2337]: I0702 00:46:20.137491 2337 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:46:20.137826 kubelet[2337]: I0702 00:46:20.137547 2337 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:46:20.137997 kubelet[2337]: I0702 00:46:20.137854 2337 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:46:20.137997 kubelet[2337]: I0702 00:46:20.137878 2337 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:46:20.138135 kubelet[2337]: I0702 00:46:20.138101 2337 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:46:20.139799 kubelet[2337]: I0702 00:46:20.139755 2337 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:46:20.139799 kubelet[2337]: I0702 00:46:20.139802 2337 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:46:20.139994 kubelet[2337]: I0702 00:46:20.139894 2337 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:46:20.139994 kubelet[2337]: I0702 00:46:20.139929 2337 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:46:20.143209 kubelet[2337]: W0702 00:46:20.143078 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.20.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.143386 kubelet[2337]: E0702 00:46:20.143248 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.143551 kubelet[2337]: W0702 00:46:20.143475 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-46&limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.143651 kubelet[2337]: E0702 00:46:20.143569 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-46&limit=500&resourceVersion=0": dial tcp 
172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.144013 kubelet[2337]: I0702 00:46:20.143952 2337 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:46:20.144628 kubelet[2337]: I0702 00:46:20.144589 2337 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:46:20.144746 kubelet[2337]: W0702 00:46:20.144695 2337 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 00:46:20.151831 kubelet[2337]: I0702 00:46:20.151793 2337 server.go:1264] "Started kubelet" Jul 2 00:46:20.169031 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 00:46:20.169204 kubelet[2337]: I0702 00:46:20.168548 2337 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:46:20.170662 kubelet[2337]: I0702 00:46:20.170629 2337 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:46:20.170878 kubelet[2337]: I0702 00:46:20.170843 2337 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:46:20.172510 kubelet[2337]: I0702 00:46:20.172415 2337 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:46:20.172812 kubelet[2337]: I0702 00:46:20.172773 2337 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:46:20.177113 kubelet[2337]: E0702 00:46:20.177063 2337 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 00:46:20.177888 kubelet[2337]: I0702 00:46:20.177842 2337 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 00:46:20.178703 kubelet[2337]: I0702 00:46:20.178647 2337 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:46:20.180472 kubelet[2337]: I0702 00:46:20.180426 2337 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:46:20.181998 kubelet[2337]: E0702 00:46:20.181806 2337 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.20.46:6443/api/v1/namespaces/default/events\": dial tcp 172.31.20.46:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-20-46.17de3ed22ed2e28c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-20-46,UID:ip-172-31-20-46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-20-46,},FirstTimestamp:2024-07-02 00:46:20.151734924 +0000 UTC m=+0.729490244,LastTimestamp:2024-07-02 00:46:20.151734924 +0000 UTC m=+0.729490244,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-20-46,}" Jul 2 00:46:20.183009 kubelet[2337]: W0702 00:46:20.182916 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.183263 kubelet[2337]: E0702 00:46:20.183226 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.183533 kubelet[2337]: E0702 00:46:20.183495 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": dial tcp 172.31.20.46:6443: connect: connection refused" interval="200ms" Jul 2 00:46:20.184982 kubelet[2337]: I0702 00:46:20.184944 2337 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:46:20.187462 kubelet[2337]: I0702 00:46:20.187414 2337 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:46:20.187462 kubelet[2337]: I0702 00:46:20.187452 2337 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:46:20.210607 kubelet[2337]: I0702 00:46:20.210570 2337 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:46:20.210607 kubelet[2337]: I0702 00:46:20.210601 2337 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:46:20.210823 kubelet[2337]: I0702 00:46:20.210635 2337 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:46:20.213905 kubelet[2337]: I0702 00:46:20.213865 2337 policy_none.go:49] "None policy: Start" Jul 2 00:46:20.215234 kubelet[2337]: I0702 00:46:20.215201 2337 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:46:20.215466 kubelet[2337]: I0702 00:46:20.215444 2337 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:46:20.229203 systemd[1]: Created slice kubepods.slice. Jul 2 00:46:20.239150 kubelet[2337]: I0702 00:46:20.238878 2337 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:46:20.242822 kubelet[2337]: I0702 00:46:20.240937 2337 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:46:20.242822 kubelet[2337]: I0702 00:46:20.241017 2337 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:46:20.242822 kubelet[2337]: I0702 00:46:20.241049 2337 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:46:20.242822 kubelet[2337]: E0702 00:46:20.241127 2337 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:46:20.242512 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 00:46:20.251665 systemd[1]: Created slice kubepods-besteffort.slice. Jul 2 00:46:20.257002 kubelet[2337]: W0702 00:46:20.256918 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.257160 kubelet[2337]: E0702 00:46:20.257015 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:20.262480 kubelet[2337]: I0702 00:46:20.262426 2337 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:46:20.262762 kubelet[2337]: I0702 00:46:20.262705 2337 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:46:20.262940 kubelet[2337]: I0702 00:46:20.262914 2337 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:46:20.271072 kubelet[2337]: E0702 00:46:20.271029 2337 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-20-46\" not found" Jul 2 00:46:20.280457 
kubelet[2337]: I0702 00:46:20.280398 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:20.281008 kubelet[2337]: E0702 00:46:20.280959 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.46:6443/api/v1/nodes\": dial tcp 172.31.20.46:6443: connect: connection refused" node="ip-172-31-20-46" Jul 2 00:46:20.343797 kubelet[2337]: I0702 00:46:20.342248 2337 topology_manager.go:215] "Topology Admit Handler" podUID="773166976dd8d9ec201809c47f9576f3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-46" Jul 2 00:46:20.345500 kubelet[2337]: I0702 00:46:20.345447 2337 topology_manager.go:215] "Topology Admit Handler" podUID="80a3731ebdc2b380234b18dd42070c6c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.347580 kubelet[2337]: I0702 00:46:20.347542 2337 topology_manager.go:215] "Topology Admit Handler" podUID="69ae5d28a3a3a794cbf38c5a6bffadf1" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-46" Jul 2 00:46:20.360384 systemd[1]: Created slice kubepods-burstable-pod773166976dd8d9ec201809c47f9576f3.slice. Jul 2 00:46:20.373939 systemd[1]: Created slice kubepods-burstable-pod69ae5d28a3a3a794cbf38c5a6bffadf1.slice. Jul 2 00:46:20.381032 systemd[1]: Created slice kubepods-burstable-pod80a3731ebdc2b380234b18dd42070c6c.slice. 
Jul 2 00:46:20.382125 kubelet[2337]: I0702 00:46:20.382089 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-ca-certs\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:20.382389 kubelet[2337]: I0702 00:46:20.382360 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:20.382552 kubelet[2337]: I0702 00:46:20.382525 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:20.382693 kubelet[2337]: I0702 00:46:20.382667 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.382832 kubelet[2337]: I0702 00:46:20.382804 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.382982 kubelet[2337]: I0702 00:46:20.382956 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.383138 kubelet[2337]: I0702 00:46:20.383110 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.383320 kubelet[2337]: I0702 00:46:20.383295 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:20.383451 kubelet[2337]: I0702 00:46:20.383426 2337 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69ae5d28a3a3a794cbf38c5a6bffadf1-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-46\" (UID: \"69ae5d28a3a3a794cbf38c5a6bffadf1\") " pod="kube-system/kube-scheduler-ip-172-31-20-46" Jul 2 00:46:20.385050 kubelet[2337]: E0702 00:46:20.384934 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": dial tcp 172.31.20.46:6443: connect: connection 
refused" interval="400ms" Jul 2 00:46:20.483590 kubelet[2337]: I0702 00:46:20.483536 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:20.484837 kubelet[2337]: E0702 00:46:20.484795 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.46:6443/api/v1/nodes\": dial tcp 172.31.20.46:6443: connect: connection refused" node="ip-172-31-20-46" Jul 2 00:46:20.670242 env[1747]: time="2024-07-02T00:46:20.670124725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-46,Uid:773166976dd8d9ec201809c47f9576f3,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:20.682962 env[1747]: time="2024-07-02T00:46:20.682866885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-46,Uid:69ae5d28a3a3a794cbf38c5a6bffadf1,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:20.689773 env[1747]: time="2024-07-02T00:46:20.689707296Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-46,Uid:80a3731ebdc2b380234b18dd42070c6c,Namespace:kube-system,Attempt:0,}" Jul 2 00:46:20.785670 kubelet[2337]: E0702 00:46:20.785609 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": dial tcp 172.31.20.46:6443: connect: connection refused" interval="800ms" Jul 2 00:46:20.888014 kubelet[2337]: I0702 00:46:20.887689 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:20.888454 kubelet[2337]: E0702 00:46:20.888397 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.46:6443/api/v1/nodes\": dial tcp 172.31.20.46:6443: connect: connection refused" node="ip-172-31-20-46" Jul 2 00:46:21.177719 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount954745003.mount: Deactivated 
successfully. Jul 2 00:46:21.188067 env[1747]: time="2024-07-02T00:46:21.187989374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.190862 kubelet[2337]: W0702 00:46:21.190785 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-46&limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.191024 kubelet[2337]: E0702 00:46:21.190882 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.20.46:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-20-46&limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.195679 env[1747]: time="2024-07-02T00:46:21.195613798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.198129 env[1747]: time="2024-07-02T00:46:21.198081919Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.200290 env[1747]: time="2024-07-02T00:46:21.200245966Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.204628 env[1747]: time="2024-07-02T00:46:21.204564484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.206476 env[1747]: time="2024-07-02T00:46:21.206426440Z" level=info msg="ImageUpdate 
event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.207886 env[1747]: time="2024-07-02T00:46:21.207837358Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.209319 env[1747]: time="2024-07-02T00:46:21.209259692Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.213930 env[1747]: time="2024-07-02T00:46:21.213875956Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.219113 env[1747]: time="2024-07-02T00:46:21.219044827Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.234328 env[1747]: time="2024-07-02T00:46:21.234271769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.256602 env[1747]: time="2024-07-02T00:46:21.256552309Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 00:46:21.269398 kubelet[2337]: W0702 00:46:21.269161 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://172.31.20.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.269398 kubelet[2337]: E0702 00:46:21.269353 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.20.46:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.274538 env[1747]: time="2024-07-02T00:46:21.274390867Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:21.274804 env[1747]: time="2024-07-02T00:46:21.274722163Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:21.274964 env[1747]: time="2024-07-02T00:46:21.274753095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:21.275509 env[1747]: time="2024-07-02T00:46:21.275409419Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285 pid=2382 runtime=io.containerd.runc.v2 Jul 2 00:46:21.275771 env[1747]: time="2024-07-02T00:46:21.275658818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:21.276627 env[1747]: time="2024-07-02T00:46:21.276062360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:21.276627 env[1747]: time="2024-07-02T00:46:21.276109195Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:21.279434 env[1747]: time="2024-07-02T00:46:21.276860320Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4a6ac7f069293ea8b69d2bb2f29f83c2a22242878ba5ae406eddffadec0c3d9 pid=2385 runtime=io.containerd.runc.v2 Jul 2 00:46:21.304662 env[1747]: time="2024-07-02T00:46:21.304528101Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:46:21.304910 env[1747]: time="2024-07-02T00:46:21.304606985Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:46:21.304910 env[1747]: time="2024-07-02T00:46:21.304634592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:46:21.305095 env[1747]: time="2024-07-02T00:46:21.304994838Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2 pid=2414 runtime=io.containerd.runc.v2 Jul 2 00:46:21.324536 systemd[1]: Started cri-containerd-b4a6ac7f069293ea8b69d2bb2f29f83c2a22242878ba5ae406eddffadec0c3d9.scope. Jul 2 00:46:21.338300 systemd[1]: Started cri-containerd-52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285.scope. Jul 2 00:46:21.393036 systemd[1]: Started cri-containerd-2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2.scope. 
Jul 2 00:46:21.443212 env[1747]: time="2024-07-02T00:46:21.441732950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-20-46,Uid:773166976dd8d9ec201809c47f9576f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4a6ac7f069293ea8b69d2bb2f29f83c2a22242878ba5ae406eddffadec0c3d9\"" Jul 2 00:46:21.452331 env[1747]: time="2024-07-02T00:46:21.452275091Z" level=info msg="CreateContainer within sandbox \"b4a6ac7f069293ea8b69d2bb2f29f83c2a22242878ba5ae406eddffadec0c3d9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 00:46:21.467233 kubelet[2337]: W0702 00:46:21.467108 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.467500 kubelet[2337]: E0702 00:46:21.467425 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.20.46:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.489959 env[1747]: time="2024-07-02T00:46:21.489844845Z" level=info msg="CreateContainer within sandbox \"b4a6ac7f069293ea8b69d2bb2f29f83c2a22242878ba5ae406eddffadec0c3d9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"92f06c7ac52dea1b62fd3b38bed39b9f7e70c32e343adf65b7586b8c7855dbd0\"" Jul 2 00:46:21.491217 env[1747]: time="2024-07-02T00:46:21.491139514Z" level=info msg="StartContainer for \"92f06c7ac52dea1b62fd3b38bed39b9f7e70c32e343adf65b7586b8c7855dbd0\"" Jul 2 00:46:21.545636 env[1747]: time="2024-07-02T00:46:21.544836208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-20-46,Uid:69ae5d28a3a3a794cbf38c5a6bffadf1,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285\"" Jul 2 00:46:21.550467 systemd[1]: Started cri-containerd-92f06c7ac52dea1b62fd3b38bed39b9f7e70c32e343adf65b7586b8c7855dbd0.scope. Jul 2 00:46:21.566307 env[1747]: time="2024-07-02T00:46:21.565856611Z" level=info msg="CreateContainer within sandbox \"52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 00:46:21.587004 kubelet[2337]: E0702 00:46:21.586919 2337 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": dial tcp 172.31.20.46:6443: connect: connection refused" interval="1.6s" Jul 2 00:46:21.590214 env[1747]: time="2024-07-02T00:46:21.590106389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-20-46,Uid:80a3731ebdc2b380234b18dd42070c6c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2\"" Jul 2 00:46:21.602925 env[1747]: time="2024-07-02T00:46:21.602853400Z" level=info msg="CreateContainer within sandbox \"2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 00:46:21.622106 env[1747]: time="2024-07-02T00:46:21.622028855Z" level=info msg="CreateContainer within sandbox \"52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f\"" Jul 2 00:46:21.622980 env[1747]: time="2024-07-02T00:46:21.622926021Z" level=info msg="StartContainer for \"77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f\"" Jul 2 00:46:21.640641 env[1747]: time="2024-07-02T00:46:21.640561789Z" level=info msg="CreateContainer within 
sandbox \"2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81\"" Jul 2 00:46:21.641447 env[1747]: time="2024-07-02T00:46:21.641398819Z" level=info msg="StartContainer for \"fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81\"" Jul 2 00:46:21.675456 kubelet[2337]: W0702 00:46:21.675333 2337 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.675643 kubelet[2337]: E0702 00:46:21.675501 2337 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.20.46:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.20.46:6443: connect: connection refused Jul 2 00:46:21.682217 env[1747]: time="2024-07-02T00:46:21.682056841Z" level=info msg="StartContainer for \"92f06c7ac52dea1b62fd3b38bed39b9f7e70c32e343adf65b7586b8c7855dbd0\" returns successfully" Jul 2 00:46:21.696240 kubelet[2337]: I0702 00:46:21.695566 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:21.696240 kubelet[2337]: E0702 00:46:21.696071 2337 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.20.46:6443/api/v1/nodes\": dial tcp 172.31.20.46:6443: connect: connection refused" node="ip-172-31-20-46" Jul 2 00:46:21.706712 systemd[1]: Started cri-containerd-fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81.scope. Jul 2 00:46:21.727419 systemd[1]: Started cri-containerd-77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f.scope. 
Jul 2 00:46:21.844443 env[1747]: time="2024-07-02T00:46:21.844348762Z" level=info msg="StartContainer for \"fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81\" returns successfully" Jul 2 00:46:21.861270 env[1747]: time="2024-07-02T00:46:21.861191467Z" level=info msg="StartContainer for \"77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f\" returns successfully" Jul 2 00:46:23.299079 kubelet[2337]: I0702 00:46:23.299028 2337 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:26.300617 kubelet[2337]: E0702 00:46:26.300567 2337 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-20-46\" not found" node="ip-172-31-20-46" Jul 2 00:46:26.350523 kubelet[2337]: I0702 00:46:26.350467 2337 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-46" Jul 2 00:46:26.984569 kubelet[2337]: E0702 00:46:26.984518 2337 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-20-46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-20-46" Jul 2 00:46:27.144934 kubelet[2337]: I0702 00:46:27.144886 2337 apiserver.go:52] "Watching apiserver" Jul 2 00:46:27.179874 kubelet[2337]: I0702 00:46:27.179831 2337 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:46:27.459855 update_engine[1740]: I0702 00:46:27.459798 1740 update_attempter.cc:509] Updating boot flags... Jul 2 00:46:28.885345 systemd[1]: Reloading. 
Jul 2 00:46:29.027052 /usr/lib/systemd/system-generators/torcx-generator[2725]: time="2024-07-02T00:46:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 00:46:29.030295 /usr/lib/systemd/system-generators/torcx-generator[2725]: time="2024-07-02T00:46:29Z" level=info msg="torcx already run" Jul 2 00:46:29.219787 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 00:46:29.220360 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 00:46:29.262728 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 00:46:29.527700 systemd[1]: Stopping kubelet.service... Jul 2 00:46:29.536369 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 00:46:29.536771 systemd[1]: Stopped kubelet.service. Jul 2 00:46:29.536872 systemd[1]: kubelet.service: Consumed 1.488s CPU time. Jul 2 00:46:29.540636 systemd[1]: Starting kubelet.service... Jul 2 00:46:29.884197 systemd[1]: Started kubelet.service. Jul 2 00:46:30.009129 kubelet[2779]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:46:30.009674 kubelet[2779]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 2 00:46:30.009811 kubelet[2779]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 00:46:30.010056 kubelet[2779]: I0702 00:46:30.010004 2779 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 00:46:30.022047 kubelet[2779]: I0702 00:46:30.021984 2779 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 00:46:30.022047 kubelet[2779]: I0702 00:46:30.022030 2779 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 00:46:30.026395 kubelet[2779]: I0702 00:46:30.022416 2779 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 00:46:30.025336 sudo[2791]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 00:46:30.026048 sudo[2791]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 00:46:30.031434 kubelet[2779]: I0702 00:46:30.031381 2779 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 00:46:30.034122 kubelet[2779]: I0702 00:46:30.034065 2779 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 00:46:30.046841 kubelet[2779]: I0702 00:46:30.046780 2779 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 00:46:30.047343 kubelet[2779]: I0702 00:46:30.047280 2779 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 00:46:30.048346 kubelet[2779]: I0702 00:46:30.047339 2779 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-20-46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 00:46:30.048579 kubelet[2779]: I0702 00:46:30.048357 2779 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 
00:46:30.048579 kubelet[2779]: I0702 00:46:30.048381 2779 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 00:46:30.048579 kubelet[2779]: I0702 00:46:30.048447 2779 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:46:30.048785 kubelet[2779]: I0702 00:46:30.048636 2779 kubelet.go:400] "Attempting to sync node with API server" Jul 2 00:46:30.048785 kubelet[2779]: I0702 00:46:30.048659 2779 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 00:46:30.048785 kubelet[2779]: I0702 00:46:30.048708 2779 kubelet.go:312] "Adding apiserver pod source" Jul 2 00:46:30.048785 kubelet[2779]: I0702 00:46:30.048742 2779 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 00:46:30.085503 kubelet[2779]: I0702 00:46:30.085449 2779 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 00:46:30.085818 kubelet[2779]: I0702 00:46:30.085781 2779 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 00:46:30.086991 kubelet[2779]: I0702 00:46:30.086934 2779 server.go:1264] "Started kubelet" Jul 2 00:46:30.099133 kubelet[2779]: I0702 00:46:30.099081 2779 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 00:46:30.110315 kubelet[2779]: I0702 00:46:30.110239 2779 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 00:46:30.112252 kubelet[2779]: I0702 00:46:30.112208 2779 server.go:455] "Adding debug handlers to kubelet server" Jul 2 00:46:30.117837 kubelet[2779]: I0702 00:46:30.117747 2779 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 00:46:30.118188 kubelet[2779]: I0702 00:46:30.118130 2779 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 00:46:30.132404 kubelet[2779]: I0702 00:46:30.132354 2779 volume_manager.go:291] 
"Starting Kubelet Volume Manager" Jul 2 00:46:30.132923 kubelet[2779]: I0702 00:46:30.132885 2779 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 00:46:30.133263 kubelet[2779]: I0702 00:46:30.133227 2779 reconciler.go:26] "Reconciler: start to sync state" Jul 2 00:46:30.163358 kubelet[2779]: I0702 00:46:30.163215 2779 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 00:46:30.169093 kubelet[2779]: I0702 00:46:30.169043 2779 factory.go:221] Registration of the containerd container factory successfully Jul 2 00:46:30.169093 kubelet[2779]: I0702 00:46:30.169081 2779 factory.go:221] Registration of the systemd container factory successfully Jul 2 00:46:30.242116 kubelet[2779]: I0702 00:46:30.242077 2779 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-20-46" Jul 2 00:46:30.265586 kubelet[2779]: I0702 00:46:30.265140 2779 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-20-46" Jul 2 00:46:30.265752 kubelet[2779]: I0702 00:46:30.265665 2779 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-20-46" Jul 2 00:46:30.279686 kubelet[2779]: I0702 00:46:30.278714 2779 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 00:46:30.294702 kubelet[2779]: I0702 00:46:30.294642 2779 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 00:46:30.294845 kubelet[2779]: I0702 00:46:30.294737 2779 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 00:46:30.294845 kubelet[2779]: I0702 00:46:30.294800 2779 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 00:46:30.294980 kubelet[2779]: E0702 00:46:30.294902 2779 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 00:46:30.331507 kubelet[2779]: I0702 00:46:30.331474 2779 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 00:46:30.331742 kubelet[2779]: I0702 00:46:30.331716 2779 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 00:46:30.331882 kubelet[2779]: I0702 00:46:30.331849 2779 state_mem.go:36] "Initialized new in-memory state store" Jul 2 00:46:30.332381 kubelet[2779]: I0702 00:46:30.332352 2779 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 00:46:30.332575 kubelet[2779]: I0702 00:46:30.332530 2779 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 00:46:30.332713 kubelet[2779]: I0702 00:46:30.332693 2779 policy_none.go:49] "None policy: Start" Jul 2 00:46:30.335823 kubelet[2779]: I0702 00:46:30.335770 2779 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 00:46:30.335823 kubelet[2779]: I0702 00:46:30.335827 2779 state_mem.go:35] "Initializing new in-memory state store" Jul 2 00:46:30.336334 kubelet[2779]: I0702 00:46:30.336102 2779 state_mem.go:75] "Updated machine memory state" Jul 2 00:46:30.353366 kubelet[2779]: I0702 00:46:30.353329 2779 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 00:46:30.353825 kubelet[2779]: I0702 00:46:30.353767 2779 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 00:46:30.354819 kubelet[2779]: I0702 00:46:30.354791 2779 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 00:46:30.405359 kubelet[2779]: I0702 00:46:30.402114 2779 topology_manager.go:215] "Topology Admit Handler" podUID="773166976dd8d9ec201809c47f9576f3" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-20-46" Jul 2 00:46:30.405359 kubelet[2779]: I0702 00:46:30.402363 2779 topology_manager.go:215] "Topology Admit Handler" podUID="80a3731ebdc2b380234b18dd42070c6c" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:30.405359 kubelet[2779]: I0702 00:46:30.402491 2779 topology_manager.go:215] "Topology Admit Handler" podUID="69ae5d28a3a3a794cbf38c5a6bffadf1" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-20-46" Jul 2 00:46:30.434520 kubelet[2779]: I0702 00:46:30.434384 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:30.434663 kubelet[2779]: I0702 00:46:30.434536 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/69ae5d28a3a3a794cbf38c5a6bffadf1-kubeconfig\") pod \"kube-scheduler-ip-172-31-20-46\" (UID: \"69ae5d28a3a3a794cbf38c5a6bffadf1\") " pod="kube-system/kube-scheduler-ip-172-31-20-46" Jul 2 00:46:30.434663 kubelet[2779]: I0702 00:46:30.434634 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-ca-certs\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:30.434806 kubelet[2779]: I0702 
00:46:30.434724 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-k8s-certs\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:30.434869 kubelet[2779]: I0702 00:46:30.434802 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/773166976dd8d9ec201809c47f9576f3-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-20-46\" (UID: \"773166976dd8d9ec201809c47f9576f3\") " pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:30.434869 kubelet[2779]: I0702 00:46:30.434855 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-ca-certs\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:30.434994 kubelet[2779]: I0702 00:46:30.434909 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-k8s-certs\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:30.434994 kubelet[2779]: I0702 00:46:30.434948 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-kubeconfig\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " 
pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:30.435115 kubelet[2779]: I0702 00:46:30.434996 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80a3731ebdc2b380234b18dd42070c6c-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-20-46\" (UID: \"80a3731ebdc2b380234b18dd42070c6c\") " pod="kube-system/kube-controller-manager-ip-172-31-20-46" Jul 2 00:46:31.049584 kubelet[2779]: I0702 00:46:31.049521 2779 apiserver.go:52] "Watching apiserver" Jul 2 00:46:31.078694 sudo[2791]: pam_unix(sudo:session): session closed for user root Jul 2 00:46:31.133912 kubelet[2779]: I0702 00:46:31.133865 2779 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 00:46:31.368986 kubelet[2779]: E0702 00:46:31.368940 2779 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-20-46\" already exists" pod="kube-system/kube-apiserver-ip-172-31-20-46" Jul 2 00:46:31.378824 kubelet[2779]: I0702 00:46:31.378750 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-20-46" podStartSLOduration=1.378730949 podStartE2EDuration="1.378730949s" podCreationTimestamp="2024-07-02 00:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:31.378431624 +0000 UTC m=+1.484645742" watchObservedRunningTime="2024-07-02 00:46:31.378730949 +0000 UTC m=+1.484945043" Jul 2 00:46:31.393324 kubelet[2779]: I0702 00:46:31.393215 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-20-46" podStartSLOduration=1.3931918429999999 podStartE2EDuration="1.393191843s" podCreationTimestamp="2024-07-02 00:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 
+0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:31.390419831 +0000 UTC m=+1.496633961" watchObservedRunningTime="2024-07-02 00:46:31.393191843 +0000 UTC m=+1.499405985" Jul 2 00:46:31.407153 kubelet[2779]: I0702 00:46:31.407041 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-20-46" podStartSLOduration=1.407017889 podStartE2EDuration="1.407017889s" podCreationTimestamp="2024-07-02 00:46:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:31.405502287 +0000 UTC m=+1.511716429" watchObservedRunningTime="2024-07-02 00:46:31.407017889 +0000 UTC m=+1.513232007" Jul 2 00:46:33.890494 sudo[1985]: pam_unix(sudo:session): session closed for user root Jul 2 00:46:33.913235 sshd[1982]: pam_unix(sshd:session): session closed for user core Jul 2 00:46:33.917998 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 00:46:33.918356 systemd[1]: session-5.scope: Consumed 11.633s CPU time. Jul 2 00:46:33.919847 systemd[1]: sshd@4-172.31.20.46:22-139.178.89.65:56126.service: Deactivated successfully. Jul 2 00:46:33.921451 systemd-logind[1738]: Session 5 logged out. Waiting for processes to exit. Jul 2 00:46:33.923052 systemd-logind[1738]: Removed session 5. Jul 2 00:46:42.063552 kubelet[2779]: I0702 00:46:42.063471 2779 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 00:46:42.065576 env[1747]: time="2024-07-02T00:46:42.065502803Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 2 00:46:42.066208 kubelet[2779]: I0702 00:46:42.066000 2779 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 00:46:42.894126 kubelet[2779]: I0702 00:46:42.894076 2779 topology_manager.go:215] "Topology Admit Handler" podUID="fa93e29b-246b-424c-b461-f7dd20f7f85c" podNamespace="kube-system" podName="kube-proxy-prsjb" Jul 2 00:46:42.905322 systemd[1]: Created slice kubepods-besteffort-podfa93e29b_246b_424c_b461_f7dd20f7f85c.slice. Jul 2 00:46:42.913909 kubelet[2779]: I0702 00:46:42.913859 2779 topology_manager.go:215] "Topology Admit Handler" podUID="215566c9-7640-4066-a908-52d8ee593c22" podNamespace="kube-system" podName="cilium-p8tf8" Jul 2 00:46:42.926453 systemd[1]: Created slice kubepods-burstable-pod215566c9_7640_4066_a908_52d8ee593c22.slice. Jul 2 00:46:43.004091 kubelet[2779]: I0702 00:46:43.004041 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-lib-modules\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8" Jul 2 00:46:43.004285 kubelet[2779]: I0702 00:46:43.004100 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgmz8\" (UniqueName: \"kubernetes.io/projected/fa93e29b-246b-424c-b461-f7dd20f7f85c-kube-api-access-rgmz8\") pod \"kube-proxy-prsjb\" (UID: \"fa93e29b-246b-424c-b461-f7dd20f7f85c\") " pod="kube-system/kube-proxy-prsjb" Jul 2 00:46:43.004285 kubelet[2779]: I0702 00:46:43.004147 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-bpf-maps\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8" Jul 2 00:46:43.004285 kubelet[2779]: I0702 00:46:43.004213 2779 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-net\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004285 kubelet[2779]: I0702 00:46:43.004252 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-kernel\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004291 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fa93e29b-246b-424c-b461-f7dd20f7f85c-lib-modules\") pod \"kube-proxy-prsjb\" (UID: \"fa93e29b-246b-424c-b461-f7dd20f7f85c\") " pod="kube-system/kube-proxy-prsjb"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004329 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2v5x\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-kube-api-access-t2v5x\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004363 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fa93e29b-246b-424c-b461-f7dd20f7f85c-kube-proxy\") pod \"kube-proxy-prsjb\" (UID: \"fa93e29b-246b-424c-b461-f7dd20f7f85c\") " pod="kube-system/kube-proxy-prsjb"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004404 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fa93e29b-246b-424c-b461-f7dd20f7f85c-xtables-lock\") pod \"kube-proxy-prsjb\" (UID: \"fa93e29b-246b-424c-b461-f7dd20f7f85c\") " pod="kube-system/kube-proxy-prsjb"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004439 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cni-path\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004541 kubelet[2779]: I0702 00:46:43.004476 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-etc-cni-netd\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004511 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/215566c9-7640-4066-a908-52d8ee593c22-cilium-config-path\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004551 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-run\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004584 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-hostproc\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004618 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/215566c9-7640-4066-a908-52d8ee593c22-clustermesh-secrets\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004684 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-xtables-lock\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.004902 kubelet[2779]: I0702 00:46:43.004731 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-cgroup\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.005286 kubelet[2779]: I0702 00:46:43.004783 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-hubble-tls\") pod \"cilium-p8tf8\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") " pod="kube-system/cilium-p8tf8"
Jul 2 00:46:43.075374 kubelet[2779]: I0702 00:46:43.075325 2779 topology_manager.go:215] "Topology Admit Handler" podUID="639369f2-44d1-4d90-9c4a-080c7e8644a7" podNamespace="kube-system" podName="cilium-operator-599987898-7zq8z"
Jul 2 00:46:43.086034 systemd[1]: Created slice kubepods-besteffort-pod639369f2_44d1_4d90_9c4a_080c7e8644a7.slice.
Jul 2 00:46:43.105895 kubelet[2779]: I0702 00:46:43.105844 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/639369f2-44d1-4d90-9c4a-080c7e8644a7-cilium-config-path\") pod \"cilium-operator-599987898-7zq8z\" (UID: \"639369f2-44d1-4d90-9c4a-080c7e8644a7\") " pod="kube-system/cilium-operator-599987898-7zq8z"
Jul 2 00:46:43.106279 kubelet[2779]: I0702 00:46:43.106243 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7grm\" (UniqueName: \"kubernetes.io/projected/639369f2-44d1-4d90-9c4a-080c7e8644a7-kube-api-access-m7grm\") pod \"cilium-operator-599987898-7zq8z\" (UID: \"639369f2-44d1-4d90-9c4a-080c7e8644a7\") " pod="kube-system/cilium-operator-599987898-7zq8z"
Jul 2 00:46:43.230758 env[1747]: time="2024-07-02T00:46:43.228072210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prsjb,Uid:fa93e29b-246b-424c-b461-f7dd20f7f85c,Namespace:kube-system,Attempt:0,}"
Jul 2 00:46:43.239554 env[1747]: time="2024-07-02T00:46:43.238228973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8tf8,Uid:215566c9-7640-4066-a908-52d8ee593c22,Namespace:kube-system,Attempt:0,}"
Jul 2 00:46:43.297703 env[1747]: time="2024-07-02T00:46:43.297573368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:46:43.297900 env[1747]: time="2024-07-02T00:46:43.297729694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:46:43.297900 env[1747]: time="2024-07-02T00:46:43.297805589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:46:43.300226 env[1747]: time="2024-07-02T00:46:43.298254371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c0c11c6b6285f05893bd766c3a6fce5ea8a1d88abc653fa09e7c4622b8df961 pid=2861 runtime=io.containerd.runc.v2
Jul 2 00:46:43.338238 env[1747]: time="2024-07-02T00:46:43.334469336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:46:43.338238 env[1747]: time="2024-07-02T00:46:43.334628411Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:46:43.338238 env[1747]: time="2024-07-02T00:46:43.334692137Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:46:43.338938 env[1747]: time="2024-07-02T00:46:43.338805179Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329 pid=2878 runtime=io.containerd.runc.v2
Jul 2 00:46:43.377543 systemd[1]: Started cri-containerd-6c0c11c6b6285f05893bd766c3a6fce5ea8a1d88abc653fa09e7c4622b8df961.scope.
Jul 2 00:46:43.394892 env[1747]: time="2024-07-02T00:46:43.394822352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7zq8z,Uid:639369f2-44d1-4d90-9c4a-080c7e8644a7,Namespace:kube-system,Attempt:0,}"
Jul 2 00:46:43.407845 systemd[1]: Started cri-containerd-c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329.scope.
Jul 2 00:46:43.440399 env[1747]: time="2024-07-02T00:46:43.440285141Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:46:43.440732 env[1747]: time="2024-07-02T00:46:43.440654667Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:46:43.440907 env[1747]: time="2024-07-02T00:46:43.440851269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:46:43.441426 env[1747]: time="2024-07-02T00:46:43.441361424Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165 pid=2918 runtime=io.containerd.runc.v2
Jul 2 00:46:43.472487 systemd[1]: Started cri-containerd-77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165.scope.
Jul 2 00:46:43.497376 env[1747]: time="2024-07-02T00:46:43.497138754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-p8tf8,Uid:215566c9-7640-4066-a908-52d8ee593c22,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\""
Jul 2 00:46:43.501820 env[1747]: time="2024-07-02T00:46:43.501754055Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 00:46:43.569864 env[1747]: time="2024-07-02T00:46:43.569807903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-prsjb,Uid:fa93e29b-246b-424c-b461-f7dd20f7f85c,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c0c11c6b6285f05893bd766c3a6fce5ea8a1d88abc653fa09e7c4622b8df961\""
Jul 2 00:46:43.580952 env[1747]: time="2024-07-02T00:46:43.580873798Z" level=info msg="CreateContainer within sandbox \"6c0c11c6b6285f05893bd766c3a6fce5ea8a1d88abc653fa09e7c4622b8df961\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 00:46:43.609661 env[1747]: time="2024-07-02T00:46:43.609594998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-7zq8z,Uid:639369f2-44d1-4d90-9c4a-080c7e8644a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\""
Jul 2 00:46:43.618301 env[1747]: time="2024-07-02T00:46:43.618203846Z" level=info msg="CreateContainer within sandbox \"6c0c11c6b6285f05893bd766c3a6fce5ea8a1d88abc653fa09e7c4622b8df961\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b19fcdb183e8361cc4c7bf018edcf298fcee2680ff025a4b5a1d7d341aadbe34\""
Jul 2 00:46:43.620716 env[1747]: time="2024-07-02T00:46:43.619486512Z" level=info msg="StartContainer for \"b19fcdb183e8361cc4c7bf018edcf298fcee2680ff025a4b5a1d7d341aadbe34\""
Jul 2 00:46:43.657431 systemd[1]: Started cri-containerd-b19fcdb183e8361cc4c7bf018edcf298fcee2680ff025a4b5a1d7d341aadbe34.scope.
Jul 2 00:46:43.732457 env[1747]: time="2024-07-02T00:46:43.732388657Z" level=info msg="StartContainer for \"b19fcdb183e8361cc4c7bf018edcf298fcee2680ff025a4b5a1d7d341aadbe34\" returns successfully"
Jul 2 00:46:44.401155 kubelet[2779]: I0702 00:46:44.400991 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-prsjb" podStartSLOduration=2.4009682469999998 podStartE2EDuration="2.400968247s" podCreationTimestamp="2024-07-02 00:46:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:46:44.400156135 +0000 UTC m=+14.506370253" watchObservedRunningTime="2024-07-02 00:46:44.400968247 +0000 UTC m=+14.507182377"
Jul 2 00:46:50.011458 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264917272.mount: Deactivated successfully.
Jul 2 00:46:54.166076 env[1747]: time="2024-07-02T00:46:54.166012401Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:54.169696 env[1747]: time="2024-07-02T00:46:54.169645233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:54.172955 env[1747]: time="2024-07-02T00:46:54.172905861Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:54.174354 env[1747]: time="2024-07-02T00:46:54.174304902Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Jul 2 00:46:54.179961 env[1747]: time="2024-07-02T00:46:54.179891452Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 00:46:54.184413 env[1747]: time="2024-07-02T00:46:54.184355256Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:46:54.205525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount405505349.mount: Deactivated successfully.
Jul 2 00:46:54.224917 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426446025.mount: Deactivated successfully.
Jul 2 00:46:54.236829 env[1747]: time="2024-07-02T00:46:54.236765127Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\""
Jul 2 00:46:54.239727 env[1747]: time="2024-07-02T00:46:54.239560664Z" level=info msg="StartContainer for \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\""
Jul 2 00:46:54.272517 systemd[1]: Started cri-containerd-b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f.scope.
Jul 2 00:46:54.338754 env[1747]: time="2024-07-02T00:46:54.338639491Z" level=info msg="StartContainer for \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\" returns successfully"
Jul 2 00:46:54.356447 systemd[1]: cri-containerd-b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f.scope: Deactivated successfully.
Jul 2 00:46:55.204493 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f-rootfs.mount: Deactivated successfully.
Jul 2 00:46:55.277680 env[1747]: time="2024-07-02T00:46:55.277608397Z" level=info msg="shim disconnected" id=b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f
Jul 2 00:46:55.277680 env[1747]: time="2024-07-02T00:46:55.277675097Z" level=warning msg="cleaning up after shim disconnected" id=b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f namespace=k8s.io
Jul 2 00:46:55.278390 env[1747]: time="2024-07-02T00:46:55.277697731Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:55.292549 env[1747]: time="2024-07-02T00:46:55.292476867Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3186 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:55.433773 env[1747]: time="2024-07-02T00:46:55.433690624Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 00:46:55.477913 env[1747]: time="2024-07-02T00:46:55.477257257Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\""
Jul 2 00:46:55.479712 env[1747]: time="2024-07-02T00:46:55.479361774Z" level=info msg="StartContainer for \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\""
Jul 2 00:46:55.535552 systemd[1]: Started cri-containerd-ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a.scope.
Jul 2 00:46:55.624261 env[1747]: time="2024-07-02T00:46:55.624158686Z" level=info msg="StartContainer for \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\" returns successfully"
Jul 2 00:46:55.640507 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 00:46:55.641466 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 00:46:55.642119 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 00:46:55.649827 systemd[1]: Starting systemd-sysctl.service...
Jul 2 00:46:55.650618 systemd[1]: cri-containerd-ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a.scope: Deactivated successfully.
Jul 2 00:46:55.673029 systemd[1]: Finished systemd-sysctl.service.
Jul 2 00:46:55.711696 env[1747]: time="2024-07-02T00:46:55.711630436Z" level=info msg="shim disconnected" id=ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a
Jul 2 00:46:55.712093 env[1747]: time="2024-07-02T00:46:55.712013152Z" level=warning msg="cleaning up after shim disconnected" id=ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a namespace=k8s.io
Jul 2 00:46:55.712289 env[1747]: time="2024-07-02T00:46:55.712259636Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:55.727465 env[1747]: time="2024-07-02T00:46:55.727410844Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3255 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:56.199569 systemd[1]: run-containerd-runc-k8s.io-ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a-runc.Y55ohv.mount: Deactivated successfully.
Jul 2 00:46:56.199771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a-rootfs.mount: Deactivated successfully.
Jul 2 00:46:56.442203 env[1747]: time="2024-07-02T00:46:56.440352515Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 00:46:56.468758 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount880870032.mount: Deactivated successfully.
Jul 2 00:46:56.483599 env[1747]: time="2024-07-02T00:46:56.483529349Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\""
Jul 2 00:46:56.484640 env[1747]: time="2024-07-02T00:46:56.484591188Z" level=info msg="StartContainer for \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\""
Jul 2 00:46:56.562564 systemd[1]: Started cri-containerd-e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf.scope.
Jul 2 00:46:56.652313 systemd[1]: cri-containerd-e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf.scope: Deactivated successfully.
Jul 2 00:46:56.657270 env[1747]: time="2024-07-02T00:46:56.657196335Z" level=info msg="StartContainer for \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\" returns successfully"
Jul 2 00:46:56.809590 env[1747]: time="2024-07-02T00:46:56.809440634Z" level=info msg="shim disconnected" id=e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf
Jul 2 00:46:56.809892 env[1747]: time="2024-07-02T00:46:56.809856220Z" level=warning msg="cleaning up after shim disconnected" id=e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf namespace=k8s.io
Jul 2 00:46:56.810058 env[1747]: time="2024-07-02T00:46:56.810029559Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:56.831504 env[1747]: time="2024-07-02T00:46:56.831439945Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3315 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:57.041324 env[1747]: time="2024-07-02T00:46:57.041270766Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:57.046363 env[1747]: time="2024-07-02T00:46:57.046298021Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:57.050659 env[1747]: time="2024-07-02T00:46:57.050593686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 00:46:57.051983 env[1747]: time="2024-07-02T00:46:57.051916756Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 2 00:46:57.058981 env[1747]: time="2024-07-02T00:46:57.058892943Z" level=info msg="CreateContainer within sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 00:46:57.087236 env[1747]: time="2024-07-02T00:46:57.087055433Z" level=info msg="CreateContainer within sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\""
Jul 2 00:46:57.089142 env[1747]: time="2024-07-02T00:46:57.089088635Z" level=info msg="StartContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\""
Jul 2 00:46:57.118530 systemd[1]: Started cri-containerd-f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a.scope.
Jul 2 00:46:57.183817 env[1747]: time="2024-07-02T00:46:57.183750153Z" level=info msg="StartContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" returns successfully"
Jul 2 00:46:57.201318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf-rootfs.mount: Deactivated successfully.
Jul 2 00:46:57.442324 env[1747]: time="2024-07-02T00:46:57.442262279Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 00:46:57.472872 env[1747]: time="2024-07-02T00:46:57.472808620Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\""
Jul 2 00:46:57.474079 env[1747]: time="2024-07-02T00:46:57.474008706Z" level=info msg="StartContainer for \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\""
Jul 2 00:46:57.538187 systemd[1]: Started cri-containerd-bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c.scope.
Jul 2 00:46:57.627737 env[1747]: time="2024-07-02T00:46:57.627670685Z" level=info msg="StartContainer for \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\" returns successfully"
Jul 2 00:46:57.630778 systemd[1]: cri-containerd-bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c.scope: Deactivated successfully.
Jul 2 00:46:57.720902 env[1747]: time="2024-07-02T00:46:57.720726588Z" level=info msg="shim disconnected" id=bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c
Jul 2 00:46:57.720902 env[1747]: time="2024-07-02T00:46:57.720800248Z" level=warning msg="cleaning up after shim disconnected" id=bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c namespace=k8s.io
Jul 2 00:46:57.720902 env[1747]: time="2024-07-02T00:46:57.720822965Z" level=info msg="cleaning up dead shim"
Jul 2 00:46:57.741940 env[1747]: time="2024-07-02T00:46:57.741871103Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:46:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3408 runtime=io.containerd.runc.v2\n"
Jul 2 00:46:58.203275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c-rootfs.mount: Deactivated successfully.
Jul 2 00:46:58.460622 env[1747]: time="2024-07-02T00:46:58.460459758Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 00:46:58.497249 env[1747]: time="2024-07-02T00:46:58.497145680Z" level=info msg="CreateContainer within sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\""
Jul 2 00:46:58.498405 env[1747]: time="2024-07-02T00:46:58.498352593Z" level=info msg="StartContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\""
Jul 2 00:46:58.559246 systemd[1]: Started cri-containerd-f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda.scope.
Jul 2 00:46:58.637286 kubelet[2779]: I0702 00:46:58.637182 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-7zq8z" podStartSLOduration=2.194605299 podStartE2EDuration="15.63712782s" podCreationTimestamp="2024-07-02 00:46:43 +0000 UTC" firstStartedPulling="2024-07-02 00:46:43.611506582 +0000 UTC m=+13.717720676" lastFinishedPulling="2024-07-02 00:46:57.054029115 +0000 UTC m=+27.160243197" observedRunningTime="2024-07-02 00:46:57.585761273 +0000 UTC m=+27.691975403" watchObservedRunningTime="2024-07-02 00:46:58.63712782 +0000 UTC m=+28.743341926"
Jul 2 00:46:58.718439 env[1747]: time="2024-07-02T00:46:58.718304312Z" level=info msg="StartContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" returns successfully"
Jul 2 00:46:59.058212 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 00:46:59.114196 kubelet[2779]: I0702 00:46:59.114115 2779 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jul 2 00:46:59.223084 kubelet[2779]: I0702 00:46:59.223019 2779 topology_manager.go:215] "Topology Admit Handler" podUID="110fb833-3324-46e3-8153-7ac355e3f002" podNamespace="kube-system" podName="coredns-7db6d8ff4d-btwbc"
Jul 2 00:46:59.233454 systemd[1]: Created slice kubepods-burstable-pod110fb833_3324_46e3_8153_7ac355e3f002.slice.
Jul 2 00:46:59.250536 kubelet[2779]: I0702 00:46:59.250477 2779 topology_manager.go:215] "Topology Admit Handler" podUID="8888dfef-a6c5-44d0-8ca5-455c0955a733" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tncwx"
Jul 2 00:46:59.260726 systemd[1]: Created slice kubepods-burstable-pod8888dfef_a6c5_44d0_8ca5_455c0955a733.slice.
Jul 2 00:46:59.279703 kubelet[2779]: W0702 00:46:59.279633 2779 reflector.go:547] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-46" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-46' and this object
Jul 2 00:46:59.279703 kubelet[2779]: E0702 00:46:59.279702 2779 reflector.go:150] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ip-172-31-20-46" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ip-172-31-20-46' and this object
Jul 2 00:46:59.342009 kubelet[2779]: I0702 00:46:59.341945 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhcxw\" (UniqueName: \"kubernetes.io/projected/110fb833-3324-46e3-8153-7ac355e3f002-kube-api-access-xhcxw\") pod \"coredns-7db6d8ff4d-btwbc\" (UID: \"110fb833-3324-46e3-8153-7ac355e3f002\") " pod="kube-system/coredns-7db6d8ff4d-btwbc"
Jul 2 00:46:59.342268 kubelet[2779]: I0702 00:46:59.342026 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8888dfef-a6c5-44d0-8ca5-455c0955a733-config-volume\") pod \"coredns-7db6d8ff4d-tncwx\" (UID: \"8888dfef-a6c5-44d0-8ca5-455c0955a733\") " pod="kube-system/coredns-7db6d8ff4d-tncwx"
Jul 2 00:46:59.342268 kubelet[2779]: I0702 00:46:59.342066 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2g45\" (UniqueName: \"kubernetes.io/projected/8888dfef-a6c5-44d0-8ca5-455c0955a733-kube-api-access-s2g45\") pod \"coredns-7db6d8ff4d-tncwx\" (UID: \"8888dfef-a6c5-44d0-8ca5-455c0955a733\") " pod="kube-system/coredns-7db6d8ff4d-tncwx"
Jul 2 00:46:59.342268 kubelet[2779]: I0702 00:46:59.342103 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/110fb833-3324-46e3-8153-7ac355e3f002-config-volume\") pod \"coredns-7db6d8ff4d-btwbc\" (UID: \"110fb833-3324-46e3-8153-7ac355e3f002\") " pod="kube-system/coredns-7db6d8ff4d-btwbc"
Jul 2 00:46:59.870225 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks!
Jul 2 00:47:00.443914 kubelet[2779]: E0702 00:47:00.443854 2779 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:47:00.445530 kubelet[2779]: E0702 00:47:00.443957 2779 configmap.go:199] Couldn't get configMap kube-system/coredns: failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:47:00.445530 kubelet[2779]: E0702 00:47:00.444364 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/110fb833-3324-46e3-8153-7ac355e3f002-config-volume podName:110fb833-3324-46e3-8153-7ac355e3f002 nodeName:}" failed. No retries permitted until 2024-07-02 00:47:00.944020201 +0000 UTC m=+31.050234295 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/110fb833-3324-46e3-8153-7ac355e3f002-config-volume") pod "coredns-7db6d8ff4d-btwbc" (UID: "110fb833-3324-46e3-8153-7ac355e3f002") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:47:00.445530 kubelet[2779]: E0702 00:47:00.444437 2779 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8888dfef-a6c5-44d0-8ca5-455c0955a733-config-volume podName:8888dfef-a6c5-44d0-8ca5-455c0955a733 nodeName:}" failed. No retries permitted until 2024-07-02 00:47:00.944390219 +0000 UTC m=+31.050604301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/8888dfef-a6c5-44d0-8ca5-455c0955a733-config-volume") pod "coredns-7db6d8ff4d-tncwx" (UID: "8888dfef-a6c5-44d0-8ca5-455c0955a733") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 00:47:01.040655 env[1747]: time="2024-07-02T00:47:01.040051385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-btwbc,Uid:110fb833-3324-46e3-8153-7ac355e3f002,Namespace:kube-system,Attempt:0,}"
Jul 2 00:47:01.074945 env[1747]: time="2024-07-02T00:47:01.074105960Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tncwx,Uid:8888dfef-a6c5-44d0-8ca5-455c0955a733,Namespace:kube-system,Attempt:0,}"
Jul 2 00:47:01.677671 systemd-networkd[1462]: cilium_host: Link UP
Jul 2 00:47:01.681594 systemd-networkd[1462]: cilium_net: Link UP
Jul 2 00:47:01.682696 systemd-networkd[1462]: cilium_net: Gained carrier
Jul 2 00:47:01.683801 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Jul 2 00:47:01.683944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 00:47:01.684315 systemd-networkd[1462]: cilium_host: Gained carrier
Jul 2 00:47:01.685390 (udev-worker)[3511]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:47:01.690408 (udev-worker)[3575]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:47:01.860647 (udev-worker)[3592]: Network interface NamePolicy= disabled on kernel command line.
Jul 2 00:47:01.874453 systemd-networkd[1462]: cilium_vxlan: Link UP Jul 2 00:47:01.874469 systemd-networkd[1462]: cilium_vxlan: Gained carrier Jul 2 00:47:02.399213 kernel: NET: Registered PF_ALG protocol family Jul 2 00:47:02.495504 systemd-networkd[1462]: cilium_host: Gained IPv6LL Jul 2 00:47:02.559394 systemd-networkd[1462]: cilium_net: Gained IPv6LL Jul 2 00:47:03.740363 (udev-worker)[3589]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:47:03.749412 systemd-networkd[1462]: lxc_health: Link UP Jul 2 00:47:03.763218 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:47:03.764009 systemd-networkd[1462]: lxc_health: Gained carrier Jul 2 00:47:03.839478 systemd-networkd[1462]: cilium_vxlan: Gained IPv6LL Jul 2 00:47:04.118495 systemd-networkd[1462]: lxcf45125a9206e: Link UP Jul 2 00:47:04.128234 kernel: eth0: renamed from tmp75db8 Jul 2 00:47:04.136496 systemd-networkd[1462]: lxcf45125a9206e: Gained carrier Jul 2 00:47:04.137206 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf45125a9206e: link becomes ready Jul 2 00:47:04.175208 systemd-networkd[1462]: lxc4786965e3db0: Link UP Jul 2 00:47:04.184236 kernel: eth0: renamed from tmpc0a24 Jul 2 00:47:04.194722 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc4786965e3db0: link becomes ready Jul 2 00:47:04.191896 systemd-networkd[1462]: lxc4786965e3db0: Gained carrier Jul 2 00:47:05.055468 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jul 2 00:47:05.271234 kubelet[2779]: I0702 00:47:05.271126 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-p8tf8" podStartSLOduration=12.593849401 podStartE2EDuration="23.271104479s" podCreationTimestamp="2024-07-02 00:46:42 +0000 UTC" firstStartedPulling="2024-07-02 00:46:43.500236503 +0000 UTC m=+13.606450609" lastFinishedPulling="2024-07-02 00:46:54.177491605 +0000 UTC m=+24.283705687" observedRunningTime="2024-07-02 00:46:59.535690983 +0000 UTC m=+29.641905149" 
watchObservedRunningTime="2024-07-02 00:47:05.271104479 +0000 UTC m=+35.377318609" Jul 2 00:47:05.375505 systemd-networkd[1462]: lxcf45125a9206e: Gained IPv6LL Jul 2 00:47:05.439409 systemd-networkd[1462]: lxc4786965e3db0: Gained IPv6LL Jul 2 00:47:12.621681 env[1747]: time="2024-07-02T00:47:12.619575652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:47:12.621681 env[1747]: time="2024-07-02T00:47:12.619659752Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:47:12.621681 env[1747]: time="2024-07-02T00:47:12.619686921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:47:12.622904 env[1747]: time="2024-07-02T00:47:12.622602932Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684 pid=3950 runtime=io.containerd.runc.v2 Jul 2 00:47:12.672353 systemd[1]: Started cri-containerd-75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684.scope. Jul 2 00:47:12.693671 systemd[1]: run-containerd-runc-k8s.io-75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684-runc.ONUuYC.mount: Deactivated successfully. Jul 2 00:47:12.746013 env[1747]: time="2024-07-02T00:47:12.745857297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:47:12.746369 env[1747]: time="2024-07-02T00:47:12.746286319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:47:12.746744 env[1747]: time="2024-07-02T00:47:12.746665669Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:47:12.752002 env[1747]: time="2024-07-02T00:47:12.751880033Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0a249f350da09eb35b98c6308303f4461c76fd99cfe3b0ac4ba84121317a22b pid=3985 runtime=io.containerd.runc.v2 Jul 2 00:47:12.799275 systemd[1]: Started cri-containerd-c0a249f350da09eb35b98c6308303f4461c76fd99cfe3b0ac4ba84121317a22b.scope. Jul 2 00:47:12.884923 env[1747]: time="2024-07-02T00:47:12.883945555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-btwbc,Uid:110fb833-3324-46e3-8153-7ac355e3f002,Namespace:kube-system,Attempt:0,} returns sandbox id \"75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684\"" Jul 2 00:47:12.893732 env[1747]: time="2024-07-02T00:47:12.893677576Z" level=info msg="CreateContainer within sandbox \"75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:47:12.924529 env[1747]: time="2024-07-02T00:47:12.924455103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tncwx,Uid:8888dfef-a6c5-44d0-8ca5-455c0955a733,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0a249f350da09eb35b98c6308303f4461c76fd99cfe3b0ac4ba84121317a22b\"" Jul 2 00:47:12.929850 env[1747]: time="2024-07-02T00:47:12.929761571Z" level=info msg="CreateContainer within sandbox \"c0a249f350da09eb35b98c6308303f4461c76fd99cfe3b0ac4ba84121317a22b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 00:47:12.937604 env[1747]: time="2024-07-02T00:47:12.937526901Z" level=info msg="CreateContainer within sandbox \"75db823e69a2c3d9b26e8abd6797c6abd707db2033800e0af9dc70e321757684\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5f14d30ee9387bda4ede6690eb904de7a614fbe92835e2a991c7ec212c755af5\"" Jul 2 00:47:12.940116 env[1747]: 
time="2024-07-02T00:47:12.940036884Z" level=info msg="StartContainer for \"5f14d30ee9387bda4ede6690eb904de7a614fbe92835e2a991c7ec212c755af5\"" Jul 2 00:47:12.960625 env[1747]: time="2024-07-02T00:47:12.960534706Z" level=info msg="CreateContainer within sandbox \"c0a249f350da09eb35b98c6308303f4461c76fd99cfe3b0ac4ba84121317a22b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"941dab1e57062b994069769c2503af8b911b8050ca9aa2bd944d44d221e7e300\"" Jul 2 00:47:12.961617 env[1747]: time="2024-07-02T00:47:12.961559964Z" level=info msg="StartContainer for \"941dab1e57062b994069769c2503af8b911b8050ca9aa2bd944d44d221e7e300\"" Jul 2 00:47:12.992485 systemd[1]: Started cri-containerd-5f14d30ee9387bda4ede6690eb904de7a614fbe92835e2a991c7ec212c755af5.scope. Jul 2 00:47:13.038297 systemd[1]: Started cri-containerd-941dab1e57062b994069769c2503af8b911b8050ca9aa2bd944d44d221e7e300.scope. Jul 2 00:47:13.099486 env[1747]: time="2024-07-02T00:47:13.099405522Z" level=info msg="StartContainer for \"5f14d30ee9387bda4ede6690eb904de7a614fbe92835e2a991c7ec212c755af5\" returns successfully" Jul 2 00:47:13.172724 env[1747]: time="2024-07-02T00:47:13.172584693Z" level=info msg="StartContainer for \"941dab1e57062b994069769c2503af8b911b8050ca9aa2bd944d44d221e7e300\" returns successfully" Jul 2 00:47:13.525876 kubelet[2779]: I0702 00:47:13.524958 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tncwx" podStartSLOduration=30.524934283 podStartE2EDuration="30.524934283s" podCreationTimestamp="2024-07-02 00:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:47:13.523442178 +0000 UTC m=+43.629656272" watchObservedRunningTime="2024-07-02 00:47:13.524934283 +0000 UTC m=+43.631148377" Jul 2 00:47:13.612797 kubelet[2779]: I0702 00:47:13.612714 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-7db6d8ff4d-btwbc" podStartSLOduration=30.612689387 podStartE2EDuration="30.612689387s" podCreationTimestamp="2024-07-02 00:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:47:13.569951134 +0000 UTC m=+43.676165264" watchObservedRunningTime="2024-07-02 00:47:13.612689387 +0000 UTC m=+43.718903481" Jul 2 00:47:14.731050 systemd[1]: Started sshd@5-172.31.20.46:22-139.178.89.65:35766.service. Jul 2 00:47:14.906652 sshd[4113]: Accepted publickey for core from 139.178.89.65 port 35766 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:14.909822 sshd[4113]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:14.922010 systemd[1]: Started session-6.scope. Jul 2 00:47:14.924346 systemd-logind[1738]: New session 6 of user core. Jul 2 00:47:15.211873 sshd[4113]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:15.217521 systemd[1]: sshd@5-172.31.20.46:22-139.178.89.65:35766.service: Deactivated successfully. Jul 2 00:47:15.218835 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 00:47:15.220339 systemd-logind[1738]: Session 6 logged out. Waiting for processes to exit. Jul 2 00:47:15.223037 systemd-logind[1738]: Removed session 6. Jul 2 00:47:20.243339 systemd[1]: Started sshd@6-172.31.20.46:22-139.178.89.65:56994.service. Jul 2 00:47:20.418295 sshd[4126]: Accepted publickey for core from 139.178.89.65 port 56994 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:20.420222 sshd[4126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:20.429950 systemd[1]: Started session-7.scope. Jul 2 00:47:20.431129 systemd-logind[1738]: New session 7 of user core. 
Jul 2 00:47:20.708939 sshd[4126]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:20.713946 systemd[1]: sshd@6-172.31.20.46:22-139.178.89.65:56994.service: Deactivated successfully. Jul 2 00:47:20.715341 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 00:47:20.716494 systemd-logind[1738]: Session 7 logged out. Waiting for processes to exit. Jul 2 00:47:20.717981 systemd-logind[1738]: Removed session 7. Jul 2 00:47:25.737466 systemd[1]: Started sshd@7-172.31.20.46:22-139.178.89.65:57000.service. Jul 2 00:47:25.907515 sshd[4141]: Accepted publickey for core from 139.178.89.65 port 57000 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:25.912035 sshd[4141]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:25.923679 systemd[1]: Started session-8.scope. Jul 2 00:47:25.925275 systemd-logind[1738]: New session 8 of user core. Jul 2 00:47:26.167834 sshd[4141]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:26.173965 systemd[1]: sshd@7-172.31.20.46:22-139.178.89.65:57000.service: Deactivated successfully. Jul 2 00:47:26.175371 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 00:47:26.176565 systemd-logind[1738]: Session 8 logged out. Waiting for processes to exit. Jul 2 00:47:26.178733 systemd-logind[1738]: Removed session 8. Jul 2 00:47:31.197347 systemd[1]: Started sshd@8-172.31.20.46:22-139.178.89.65:54540.service. Jul 2 00:47:31.370972 sshd[4159]: Accepted publickey for core from 139.178.89.65 port 54540 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:31.373642 sshd[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:31.387287 systemd-logind[1738]: New session 9 of user core. Jul 2 00:47:31.389276 systemd[1]: Started session-9.scope. Jul 2 00:47:31.649260 sshd[4159]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:31.654520 systemd-logind[1738]: Session 9 logged out. 
Waiting for processes to exit. Jul 2 00:47:31.654684 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 00:47:31.656254 systemd[1]: sshd@8-172.31.20.46:22-139.178.89.65:54540.service: Deactivated successfully. Jul 2 00:47:31.658589 systemd-logind[1738]: Removed session 9. Jul 2 00:47:36.679516 systemd[1]: Started sshd@9-172.31.20.46:22-139.178.89.65:54552.service. Jul 2 00:47:36.858349 sshd[4172]: Accepted publickey for core from 139.178.89.65 port 54552 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:36.861493 sshd[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:36.869255 systemd-logind[1738]: New session 10 of user core. Jul 2 00:47:36.870244 systemd[1]: Started session-10.scope. Jul 2 00:47:37.112942 sshd[4172]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:37.117944 systemd[1]: sshd@9-172.31.20.46:22-139.178.89.65:54552.service: Deactivated successfully. Jul 2 00:47:37.119331 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 00:47:37.120531 systemd-logind[1738]: Session 10 logged out. Waiting for processes to exit. Jul 2 00:47:37.122088 systemd-logind[1738]: Removed session 10. Jul 2 00:47:37.141297 systemd[1]: Started sshd@10-172.31.20.46:22-139.178.89.65:54556.service. Jul 2 00:47:37.314710 sshd[4185]: Accepted publickey for core from 139.178.89.65 port 54556 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:37.317400 sshd[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:37.327031 systemd[1]: Started session-11.scope. Jul 2 00:47:37.327842 systemd-logind[1738]: New session 11 of user core. Jul 2 00:47:37.645551 sshd[4185]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:37.651342 systemd[1]: sshd@10-172.31.20.46:22-139.178.89.65:54556.service: Deactivated successfully. Jul 2 00:47:37.652681 systemd[1]: session-11.scope: Deactivated successfully. 
Jul 2 00:47:37.654028 systemd-logind[1738]: Session 11 logged out. Waiting for processes to exit. Jul 2 00:47:37.656030 systemd-logind[1738]: Removed session 11. Jul 2 00:47:37.680940 systemd[1]: Started sshd@11-172.31.20.46:22-139.178.89.65:54572.service. Jul 2 00:47:37.857452 sshd[4195]: Accepted publickey for core from 139.178.89.65 port 54572 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:37.860625 sshd[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:37.870416 systemd[1]: Started session-12.scope. Jul 2 00:47:37.871272 systemd-logind[1738]: New session 12 of user core. Jul 2 00:47:38.129286 sshd[4195]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:38.134537 systemd-logind[1738]: Session 12 logged out. Waiting for processes to exit. Jul 2 00:47:38.135148 systemd[1]: sshd@11-172.31.20.46:22-139.178.89.65:54572.service: Deactivated successfully. Jul 2 00:47:38.136493 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 00:47:38.138409 systemd-logind[1738]: Removed session 12. Jul 2 00:47:43.158340 systemd[1]: Started sshd@12-172.31.20.46:22-139.178.89.65:38544.service. Jul 2 00:47:43.334439 sshd[4208]: Accepted publickey for core from 139.178.89.65 port 38544 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:43.337138 sshd[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:43.346575 systemd[1]: Started session-13.scope. Jul 2 00:47:43.347265 systemd-logind[1738]: New session 13 of user core. Jul 2 00:47:43.594054 sshd[4208]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:43.600831 systemd-logind[1738]: Session 13 logged out. Waiting for processes to exit. Jul 2 00:47:43.601439 systemd[1]: sshd@12-172.31.20.46:22-139.178.89.65:38544.service: Deactivated successfully. Jul 2 00:47:43.602779 systemd[1]: session-13.scope: Deactivated successfully. 
Jul 2 00:47:43.604501 systemd-logind[1738]: Removed session 13. Jul 2 00:47:48.624617 systemd[1]: Started sshd@13-172.31.20.46:22-139.178.89.65:56228.service. Jul 2 00:47:48.805054 sshd[4222]: Accepted publickey for core from 139.178.89.65 port 56228 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:48.808256 sshd[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:48.817290 systemd[1]: Started session-14.scope. Jul 2 00:47:48.818421 systemd-logind[1738]: New session 14 of user core. Jul 2 00:47:49.066531 sshd[4222]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:49.071840 systemd-logind[1738]: Session 14 logged out. Waiting for processes to exit. Jul 2 00:47:49.072282 systemd[1]: sshd@13-172.31.20.46:22-139.178.89.65:56228.service: Deactivated successfully. Jul 2 00:47:49.073629 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 00:47:49.075147 systemd-logind[1738]: Removed session 14. Jul 2 00:47:54.096907 systemd[1]: Started sshd@14-172.31.20.46:22-139.178.89.65:56238.service. Jul 2 00:47:54.273583 sshd[4236]: Accepted publickey for core from 139.178.89.65 port 56238 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:54.276117 sshd[4236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:54.284413 systemd-logind[1738]: New session 15 of user core. Jul 2 00:47:54.285332 systemd[1]: Started session-15.scope. Jul 2 00:47:54.540583 sshd[4236]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:54.546266 systemd[1]: sshd@14-172.31.20.46:22-139.178.89.65:56238.service: Deactivated successfully. Jul 2 00:47:54.547613 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 00:47:54.549292 systemd-logind[1738]: Session 15 logged out. Waiting for processes to exit. Jul 2 00:47:54.551813 systemd-logind[1738]: Removed session 15. 
Jul 2 00:47:54.570363 systemd[1]: Started sshd@15-172.31.20.46:22-139.178.89.65:56242.service. Jul 2 00:47:54.745702 sshd[4248]: Accepted publickey for core from 139.178.89.65 port 56242 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:54.749044 sshd[4248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:54.758336 systemd-logind[1738]: New session 16 of user core. Jul 2 00:47:54.758821 systemd[1]: Started session-16.scope. Jul 2 00:47:55.065596 sshd[4248]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:55.071004 systemd-logind[1738]: Session 16 logged out. Waiting for processes to exit. Jul 2 00:47:55.071661 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 00:47:55.073108 systemd[1]: sshd@15-172.31.20.46:22-139.178.89.65:56242.service: Deactivated successfully. Jul 2 00:47:55.074770 systemd-logind[1738]: Removed session 16. Jul 2 00:47:55.092714 systemd[1]: Started sshd@16-172.31.20.46:22-139.178.89.65:56254.service. Jul 2 00:47:55.265896 sshd[4257]: Accepted publickey for core from 139.178.89.65 port 56254 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:55.268588 sshd[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:55.277517 systemd[1]: Started session-17.scope. Jul 2 00:47:55.278356 systemd-logind[1738]: New session 17 of user core. Jul 2 00:47:57.824372 sshd[4257]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:57.831152 systemd[1]: sshd@16-172.31.20.46:22-139.178.89.65:56254.service: Deactivated successfully. Jul 2 00:47:57.832561 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 00:47:57.834801 systemd-logind[1738]: Session 17 logged out. Waiting for processes to exit. Jul 2 00:47:57.836782 systemd-logind[1738]: Removed session 17. Jul 2 00:47:57.858410 systemd[1]: Started sshd@17-172.31.20.46:22-139.178.89.65:56264.service. 
Jul 2 00:47:58.034476 sshd[4274]: Accepted publickey for core from 139.178.89.65 port 56264 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:58.037078 sshd[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:58.046246 systemd[1]: Started session-18.scope. Jul 2 00:47:58.046992 systemd-logind[1738]: New session 18 of user core. Jul 2 00:47:58.532143 sshd[4274]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:58.538868 systemd[1]: sshd@17-172.31.20.46:22-139.178.89.65:56264.service: Deactivated successfully. Jul 2 00:47:58.540449 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 00:47:58.541931 systemd-logind[1738]: Session 18 logged out. Waiting for processes to exit. Jul 2 00:47:58.544109 systemd-logind[1738]: Removed session 18. Jul 2 00:47:58.561308 systemd[1]: Started sshd@18-172.31.20.46:22-139.178.89.65:49110.service. Jul 2 00:47:58.738821 sshd[4284]: Accepted publickey for core from 139.178.89.65 port 49110 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:47:58.741619 sshd[4284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:47:58.751313 systemd[1]: Started session-19.scope. Jul 2 00:47:58.752390 systemd-logind[1738]: New session 19 of user core. Jul 2 00:47:59.001831 sshd[4284]: pam_unix(sshd:session): session closed for user core Jul 2 00:47:59.006699 systemd[1]: sshd@18-172.31.20.46:22-139.178.89.65:49110.service: Deactivated successfully. Jul 2 00:47:59.008047 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 00:47:59.010331 systemd-logind[1738]: Session 19 logged out. Waiting for processes to exit. Jul 2 00:47:59.012224 systemd-logind[1738]: Removed session 19. Jul 2 00:48:04.029084 systemd[1]: Started sshd@19-172.31.20.46:22-139.178.89.65:49118.service. 
Jul 2 00:48:04.199625 sshd[4296]: Accepted publickey for core from 139.178.89.65 port 49118 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:04.202935 sshd[4296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:04.212155 systemd-logind[1738]: New session 20 of user core. Jul 2 00:48:04.212752 systemd[1]: Started session-20.scope. Jul 2 00:48:04.462618 sshd[4296]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:04.467634 systemd-logind[1738]: Session 20 logged out. Waiting for processes to exit. Jul 2 00:48:04.468242 systemd[1]: sshd@19-172.31.20.46:22-139.178.89.65:49118.service: Deactivated successfully. Jul 2 00:48:04.469972 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 00:48:04.471839 systemd-logind[1738]: Removed session 20. Jul 2 00:48:09.495463 systemd[1]: Started sshd@20-172.31.20.46:22-139.178.89.65:49414.service. Jul 2 00:48:09.675785 sshd[4311]: Accepted publickey for core from 139.178.89.65 port 49414 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:09.678852 sshd[4311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:09.686436 systemd-logind[1738]: New session 21 of user core. Jul 2 00:48:09.687860 systemd[1]: Started session-21.scope. Jul 2 00:48:09.942550 sshd[4311]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:09.947386 systemd-logind[1738]: Session 21 logged out. Waiting for processes to exit. Jul 2 00:48:09.948609 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 00:48:09.949742 systemd[1]: sshd@20-172.31.20.46:22-139.178.89.65:49414.service: Deactivated successfully. Jul 2 00:48:09.953122 systemd-logind[1738]: Removed session 21. Jul 2 00:48:14.973598 systemd[1]: Started sshd@21-172.31.20.46:22-139.178.89.65:49428.service. 
Jul 2 00:48:15.151229 sshd[4325]: Accepted publickey for core from 139.178.89.65 port 49428 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:15.153883 sshd[4325]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:15.162347 systemd-logind[1738]: New session 22 of user core. Jul 2 00:48:15.163381 systemd[1]: Started session-22.scope. Jul 2 00:48:15.404704 sshd[4325]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:15.409785 systemd-logind[1738]: Session 22 logged out. Waiting for processes to exit. Jul 2 00:48:15.410463 systemd[1]: sshd@21-172.31.20.46:22-139.178.89.65:49428.service: Deactivated successfully. Jul 2 00:48:15.411746 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 00:48:15.414430 systemd-logind[1738]: Removed session 22. Jul 2 00:48:20.433379 systemd[1]: Started sshd@22-172.31.20.46:22-139.178.89.65:46982.service. Jul 2 00:48:20.607429 sshd[4336]: Accepted publickey for core from 139.178.89.65 port 46982 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:20.610003 sshd[4336]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:20.617515 systemd-logind[1738]: New session 23 of user core. Jul 2 00:48:20.619705 systemd[1]: Started session-23.scope. Jul 2 00:48:20.868721 sshd[4336]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:20.872927 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 00:48:20.874129 systemd-logind[1738]: Session 23 logged out. Waiting for processes to exit. Jul 2 00:48:20.874550 systemd[1]: sshd@22-172.31.20.46:22-139.178.89.65:46982.service: Deactivated successfully. Jul 2 00:48:20.877046 systemd-logind[1738]: Removed session 23. Jul 2 00:48:20.895227 systemd[1]: Started sshd@23-172.31.20.46:22-139.178.89.65:46992.service. 
Jul 2 00:48:21.077078 sshd[4348]: Accepted publickey for core from 139.178.89.65 port 46992 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:21.078778 sshd[4348]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:21.086272 systemd-logind[1738]: New session 24 of user core. Jul 2 00:48:21.088422 systemd[1]: Started session-24.scope. Jul 2 00:48:22.988522 env[1747]: time="2024-07-02T00:48:22.988315129Z" level=info msg="StopContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" with timeout 30 (s)" Jul 2 00:48:22.989128 env[1747]: time="2024-07-02T00:48:22.988875242Z" level=info msg="Stop container \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" with signal terminated" Jul 2 00:48:23.022280 systemd[1]: cri-containerd-f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a.scope: Deactivated successfully. Jul 2 00:48:23.042475 env[1747]: time="2024-07-02T00:48:23.042380206Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 2 00:48:23.059840 env[1747]: time="2024-07-02T00:48:23.059765277Z" level=info msg="StopContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" with timeout 2 (s)" Jul 2 00:48:23.060407 env[1747]: time="2024-07-02T00:48:23.060350314Z" level=info msg="Stop container \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" with signal terminated" Jul 2 00:48:23.079198 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a-rootfs.mount: Deactivated successfully. 
Jul 2 00:48:23.093726 systemd-networkd[1462]: lxc_health: Link DOWN Jul 2 00:48:23.093740 systemd-networkd[1462]: lxc_health: Lost carrier Jul 2 00:48:23.102883 env[1747]: time="2024-07-02T00:48:23.102807723Z" level=info msg="shim disconnected" id=f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a Jul 2 00:48:23.103139 env[1747]: time="2024-07-02T00:48:23.102880769Z" level=warning msg="cleaning up after shim disconnected" id=f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a namespace=k8s.io Jul 2 00:48:23.103139 env[1747]: time="2024-07-02T00:48:23.102903090Z" level=info msg="cleaning up dead shim" Jul 2 00:48:23.129123 systemd[1]: cri-containerd-f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda.scope: Deactivated successfully. Jul 2 00:48:23.129775 systemd[1]: cri-containerd-f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda.scope: Consumed 14.537s CPU time. Jul 2 00:48:23.131855 env[1747]: time="2024-07-02T00:48:23.131785754Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4400 runtime=io.containerd.runc.v2\n" Jul 2 00:48:23.140359 env[1747]: time="2024-07-02T00:48:23.140285355Z" level=info msg="StopContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" returns successfully" Jul 2 00:48:23.141424 env[1747]: time="2024-07-02T00:48:23.141371404Z" level=info msg="StopPodSandbox for \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\"" Jul 2 00:48:23.141718 env[1747]: time="2024-07-02T00:48:23.141675179Z" level=info msg="Container to stop \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 00:48:23.145364 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165-shm.mount: Deactivated successfully. 
Jul 2 00:48:23.163091 systemd[1]: cri-containerd-77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165.scope: Deactivated successfully. Jul 2 00:48:23.190348 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda-rootfs.mount: Deactivated successfully. Jul 2 00:48:23.208609 env[1747]: time="2024-07-02T00:48:23.208535298Z" level=info msg="shim disconnected" id=f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda Jul 2 00:48:23.208905 env[1747]: time="2024-07-02T00:48:23.208608164Z" level=warning msg="cleaning up after shim disconnected" id=f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda namespace=k8s.io Jul 2 00:48:23.208905 env[1747]: time="2024-07-02T00:48:23.208631085Z" level=info msg="cleaning up dead shim" Jul 2 00:48:23.221615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165-rootfs.mount: Deactivated successfully. 
Jul 2 00:48:23.231503 env[1747]: time="2024-07-02T00:48:23.231438395Z" level=info msg="shim disconnected" id=77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165
Jul 2 00:48:23.232754 env[1747]: time="2024-07-02T00:48:23.232707640Z" level=warning msg="cleaning up after shim disconnected" id=77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165 namespace=k8s.io
Jul 2 00:48:23.232946 env[1747]: time="2024-07-02T00:48:23.232916457Z" level=info msg="cleaning up dead shim"
Jul 2 00:48:23.234677 env[1747]: time="2024-07-02T00:48:23.234625703Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4447 runtime=io.containerd.runc.v2\n"
Jul 2 00:48:23.238305 env[1747]: time="2024-07-02T00:48:23.238162784Z" level=info msg="StopContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" returns successfully"
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239140818Z" level=info msg="StopPodSandbox for \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\""
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239255193Z" level=info msg="Container to stop \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239287953Z" level=info msg="Container to stop \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239317486Z" level=info msg="Container to stop \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239346491Z" level=info msg="Container to stop \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:23.240543 env[1747]: time="2024-07-02T00:48:23.239376035Z" level=info msg="Container to stop \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:23.256209 systemd[1]: cri-containerd-c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329.scope: Deactivated successfully.
Jul 2 00:48:23.259054 env[1747]: time="2024-07-02T00:48:23.258974845Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4461 runtime=io.containerd.runc.v2\n"
Jul 2 00:48:23.260243 env[1747]: time="2024-07-02T00:48:23.260148087Z" level=info msg="TearDown network for sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" successfully"
Jul 2 00:48:23.260467 env[1747]: time="2024-07-02T00:48:23.260431774Z" level=info msg="StopPodSandbox for \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" returns successfully"
Jul 2 00:48:23.323260 env[1747]: time="2024-07-02T00:48:23.323183432Z" level=info msg="shim disconnected" id=c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329
Jul 2 00:48:23.323559 env[1747]: time="2024-07-02T00:48:23.323258962Z" level=warning msg="cleaning up after shim disconnected" id=c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329 namespace=k8s.io
Jul 2 00:48:23.323559 env[1747]: time="2024-07-02T00:48:23.323281930Z" level=info msg="cleaning up dead shim"
Jul 2 00:48:23.339882 kubelet[2779]: I0702 00:48:23.337860 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/639369f2-44d1-4d90-9c4a-080c7e8644a7-cilium-config-path\") pod \"639369f2-44d1-4d90-9c4a-080c7e8644a7\" (UID: \"639369f2-44d1-4d90-9c4a-080c7e8644a7\") "
Jul 2 00:48:23.339882 kubelet[2779]: I0702 00:48:23.337952 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m7grm\" (UniqueName: \"kubernetes.io/projected/639369f2-44d1-4d90-9c4a-080c7e8644a7-kube-api-access-m7grm\") pod \"639369f2-44d1-4d90-9c4a-080c7e8644a7\" (UID: \"639369f2-44d1-4d90-9c4a-080c7e8644a7\") "
Jul 2 00:48:23.343530 kubelet[2779]: I0702 00:48:23.343458 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/639369f2-44d1-4d90-9c4a-080c7e8644a7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "639369f2-44d1-4d90-9c4a-080c7e8644a7" (UID: "639369f2-44d1-4d90-9c4a-080c7e8644a7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:48:23.346730 kubelet[2779]: I0702 00:48:23.346674 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/639369f2-44d1-4d90-9c4a-080c7e8644a7-kube-api-access-m7grm" (OuterVolumeSpecName: "kube-api-access-m7grm") pod "639369f2-44d1-4d90-9c4a-080c7e8644a7" (UID: "639369f2-44d1-4d90-9c4a-080c7e8644a7"). InnerVolumeSpecName "kube-api-access-m7grm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:48:23.348053 env[1747]: time="2024-07-02T00:48:23.347987492Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4494 runtime=io.containerd.runc.v2\n"
Jul 2 00:48:23.348697 env[1747]: time="2024-07-02T00:48:23.348644819Z" level=info msg="TearDown network for sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" successfully"
Jul 2 00:48:23.348697 env[1747]: time="2024-07-02T00:48:23.348695988Z" level=info msg="StopPodSandbox for \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" returns successfully"
Jul 2 00:48:23.438822 kubelet[2779]: I0702 00:48:23.438753 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-run\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439111 kubelet[2779]: I0702 00:48:23.439084 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-bpf-maps\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439330 kubelet[2779]: I0702 00:48:23.439304 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2v5x\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-kube-api-access-t2v5x\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439504 kubelet[2779]: I0702 00:48:23.439478 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/215566c9-7640-4066-a908-52d8ee593c22-clustermesh-secrets\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439650 kubelet[2779]: I0702 00:48:23.439625 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-xtables-lock\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439807 kubelet[2779]: I0702 00:48:23.439780 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-kernel\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.439972 kubelet[2779]: I0702 00:48:23.439947 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-hostproc\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440146 kubelet[2779]: I0702 00:48:23.440107 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/215566c9-7640-4066-a908-52d8ee593c22-cilium-config-path\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440345 kubelet[2779]: I0702 00:48:23.440318 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-cgroup\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440490 kubelet[2779]: I0702 00:48:23.440465 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-net\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440627 kubelet[2779]: I0702 00:48:23.440603 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cni-path\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440762 kubelet[2779]: I0702 00:48:23.440738 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-etc-cni-netd\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.440907 kubelet[2779]: I0702 00:48:23.440882 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-lib-modules\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.441061 kubelet[2779]: I0702 00:48:23.441036 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-hubble-tls\") pod \"215566c9-7640-4066-a908-52d8ee593c22\" (UID: \"215566c9-7640-4066-a908-52d8ee593c22\") "
Jul 2 00:48:23.441264 kubelet[2779]: I0702 00:48:23.441237 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/639369f2-44d1-4d90-9c4a-080c7e8644a7-cilium-config-path\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.441407 kubelet[2779]: I0702 00:48:23.441384 2779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-m7grm\" (UniqueName: \"kubernetes.io/projected/639369f2-44d1-4d90-9c4a-080c7e8644a7-kube-api-access-m7grm\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.444306 kubelet[2779]: I0702 00:48:23.438835 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.444474 kubelet[2779]: I0702 00:48:23.439190 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.444668 kubelet[2779]: I0702 00:48:23.444617 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.444826 kubelet[2779]: I0702 00:48:23.444798 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.444978 kubelet[2779]: I0702 00:48:23.444949 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cni-path" (OuterVolumeSpecName: "cni-path") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.445133 kubelet[2779]: I0702 00:48:23.445104 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.445334 kubelet[2779]: I0702 00:48:23.445308 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.445495 kubelet[2779]: I0702 00:48:23.445461 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.445642 kubelet[2779]: I0702 00:48:23.445616 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.445787 kubelet[2779]: I0702 00:48:23.445761 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-hostproc" (OuterVolumeSpecName: "hostproc") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:23.448445 kubelet[2779]: I0702 00:48:23.448392 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-kube-api-access-t2v5x" (OuterVolumeSpecName: "kube-api-access-t2v5x") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "kube-api-access-t2v5x". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:48:23.449546 kubelet[2779]: I0702 00:48:23.449498 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/215566c9-7640-4066-a908-52d8ee593c22-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:48:23.450401 kubelet[2779]: I0702 00:48:23.450339 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/215566c9-7640-4066-a908-52d8ee593c22-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:48:23.455025 kubelet[2779]: I0702 00:48:23.454967 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "215566c9-7640-4066-a908-52d8ee593c22" (UID: "215566c9-7640-4066-a908-52d8ee593c22"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.542488 2779 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cni-path\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543118 2779 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-etc-cni-netd\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543161 2779 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-hubble-tls\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543206 2779 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-lib-modules\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543228 2779 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-bpf-maps\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543251 2779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-t2v5x\" (UniqueName: \"kubernetes.io/projected/215566c9-7640-4066-a908-52d8ee593c22-kube-api-access-t2v5x\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543279 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-run\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.543803 kubelet[2779]: I0702 00:48:23.543305 2779 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/215566c9-7640-4066-a908-52d8ee593c22-clustermesh-secrets\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543326 2779 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-xtables-lock\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543346 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-kernel\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543367 2779 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-hostproc\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543389 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-host-proc-sys-net\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543409 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/215566c9-7640-4066-a908-52d8ee593c22-cilium-config-path\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.544408 kubelet[2779]: I0702 00:48:23.543428 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/215566c9-7640-4066-a908-52d8ee593c22-cilium-cgroup\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:23.704195 kubelet[2779]: I0702 00:48:23.704093 2779 scope.go:117] "RemoveContainer" containerID="f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda"
Jul 2 00:48:23.710921 env[1747]: time="2024-07-02T00:48:23.710869907Z" level=info msg="RemoveContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\""
Jul 2 00:48:23.717088 systemd[1]: Removed slice kubepods-burstable-pod215566c9_7640_4066_a908_52d8ee593c22.slice.
Jul 2 00:48:23.717331 systemd[1]: kubepods-burstable-pod215566c9_7640_4066_a908_52d8ee593c22.slice: Consumed 14.771s CPU time.
Jul 2 00:48:23.727405 systemd[1]: Removed slice kubepods-besteffort-pod639369f2_44d1_4d90_9c4a_080c7e8644a7.slice.
Jul 2 00:48:23.732544 env[1747]: time="2024-07-02T00:48:23.732481127Z" level=info msg="RemoveContainer for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" returns successfully"
Jul 2 00:48:23.734451 kubelet[2779]: I0702 00:48:23.733423 2779 scope.go:117] "RemoveContainer" containerID="bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c"
Jul 2 00:48:23.740701 env[1747]: time="2024-07-02T00:48:23.740636224Z" level=info msg="RemoveContainer for \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\""
Jul 2 00:48:23.752923 env[1747]: time="2024-07-02T00:48:23.752601072Z" level=info msg="RemoveContainer for \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\" returns successfully"
Jul 2 00:48:23.753397 kubelet[2779]: I0702 00:48:23.753362 2779 scope.go:117] "RemoveContainer" containerID="e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf"
Jul 2 00:48:23.760488 env[1747]: time="2024-07-02T00:48:23.759537685Z" level=info msg="RemoveContainer for \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\""
Jul 2 00:48:23.783053 env[1747]: time="2024-07-02T00:48:23.782993215Z" level=info msg="RemoveContainer for \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\" returns successfully"
Jul 2 00:48:23.783841 kubelet[2779]: I0702 00:48:23.783620 2779 scope.go:117] "RemoveContainer" containerID="ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a"
Jul 2 00:48:23.785787 env[1747]: time="2024-07-02T00:48:23.785739213Z" level=info msg="RemoveContainer for \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\""
Jul 2 00:48:23.790881 env[1747]: time="2024-07-02T00:48:23.790825337Z" level=info msg="RemoveContainer for \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\" returns successfully"
Jul 2 00:48:23.791661 kubelet[2779]: I0702 00:48:23.791470 2779 scope.go:117] "RemoveContainer" containerID="b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f"
Jul 2 00:48:23.793704 env[1747]: time="2024-07-02T00:48:23.793590175Z" level=info msg="RemoveContainer for \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\""
Jul 2 00:48:23.799576 env[1747]: time="2024-07-02T00:48:23.799418852Z" level=info msg="RemoveContainer for \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\" returns successfully"
Jul 2 00:48:23.800839 kubelet[2779]: I0702 00:48:23.800619 2779 scope.go:117] "RemoveContainer" containerID="f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda"
Jul 2 00:48:23.801519 env[1747]: time="2024-07-02T00:48:23.801397193Z" level=error msg="ContainerStatus for \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\": not found"
Jul 2 00:48:23.801890 kubelet[2779]: E0702 00:48:23.801848 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\": not found" containerID="f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda"
Jul 2 00:48:23.802414 kubelet[2779]: I0702 00:48:23.801905 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda"} err="failed to get container status \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\": rpc error: code = NotFound desc = an error occurred when try to find container \"f8f446d32488e36af9c60327f7f9240f31d88fbc746d4d124c95b60be777bcda\": not found"
Jul 2 00:48:23.802414 kubelet[2779]: I0702 00:48:23.802408 2779 scope.go:117] "RemoveContainer" containerID="bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c"
Jul 2 00:48:23.802956 env[1747]: time="2024-07-02T00:48:23.802865234Z" level=error msg="ContainerStatus for \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\": not found"
Jul 2 00:48:23.803617 kubelet[2779]: E0702 00:48:23.803341 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\": not found" containerID="bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c"
Jul 2 00:48:23.803617 kubelet[2779]: I0702 00:48:23.803411 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c"} err="failed to get container status \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\": rpc error: code = NotFound desc = an error occurred when try to find container \"bac32b428d7e72e75bd2048136a359f9e2183624bc3dd7c7313d0a5249d34d7c\": not found"
Jul 2 00:48:23.803617 kubelet[2779]: I0702 00:48:23.803447 2779 scope.go:117] "RemoveContainer" containerID="e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf"
Jul 2 00:48:23.804430 env[1747]: time="2024-07-02T00:48:23.804338736Z" level=error msg="ContainerStatus for \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\": not found"
Jul 2 00:48:23.805232 kubelet[2779]: E0702 00:48:23.804908 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\": not found" containerID="e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf"
Jul 2 00:48:23.805232 kubelet[2779]: I0702 00:48:23.804955 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf"} err="failed to get container status \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\": rpc error: code = NotFound desc = an error occurred when try to find container \"e67aa17d1dc2773f9865aa7ff09ee9a1c9fdc3b3c5bee96b00454963f5799ccf\": not found"
Jul 2 00:48:23.805232 kubelet[2779]: I0702 00:48:23.805012 2779 scope.go:117] "RemoveContainer" containerID="ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a"
Jul 2 00:48:23.805540 env[1747]: time="2024-07-02T00:48:23.805444297Z" level=error msg="ContainerStatus for \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\": not found"
Jul 2 00:48:23.806097 kubelet[2779]: E0702 00:48:23.805831 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\": not found" containerID="ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a"
Jul 2 00:48:23.806097 kubelet[2779]: I0702 00:48:23.805898 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a"} err="failed to get container status \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\": rpc error: code = NotFound desc = an error occurred when try to find container \"ec5bdc30cee7f4cb07bcc526ae3aa854246b02a9ff18486cc2995c09a23caa9a\": not found"
Jul 2 00:48:23.806097 kubelet[2779]: I0702 00:48:23.805931 2779 scope.go:117] "RemoveContainer" containerID="b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f"
Jul 2 00:48:23.806376 env[1747]: time="2024-07-02T00:48:23.806301548Z" level=error msg="ContainerStatus for \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\": not found"
Jul 2 00:48:23.806682 kubelet[2779]: E0702 00:48:23.806637 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\": not found" containerID="b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f"
Jul 2 00:48:23.806809 kubelet[2779]: I0702 00:48:23.806685 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f"} err="failed to get container status \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\": rpc error: code = NotFound desc = an error occurred when try to find container \"b299e7b883073be1aec111d89f72381d2de1c849f9238210a1a5d88a876ef70f\": not found"
Jul 2 00:48:23.806809 kubelet[2779]: I0702 00:48:23.806726 2779 scope.go:117] "RemoveContainer" containerID="f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a"
Jul 2 00:48:23.809207 env[1747]: time="2024-07-02T00:48:23.809124324Z" level=info msg="RemoveContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\""
Jul 2 00:48:23.818932 env[1747]: time="2024-07-02T00:48:23.818834153Z" level=info msg="RemoveContainer for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" returns successfully"
Jul 2 00:48:23.819505 kubelet[2779]: I0702 00:48:23.819388 2779 scope.go:117] "RemoveContainer" containerID="f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a"
Jul 2 00:48:23.820209 env[1747]: time="2024-07-02T00:48:23.820083142Z" level=error msg="ContainerStatus for \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\": not found"
Jul 2 00:48:23.820655 kubelet[2779]: E0702 00:48:23.820555 2779 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\": not found" containerID="f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a"
Jul 2 00:48:23.820861 kubelet[2779]: I0702 00:48:23.820822 2779 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a"} err="failed to get container status \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\": rpc error: code = NotFound desc = an error occurred when try to find container \"f76fd64f06f39d7610286b34363cb683d3e068ef2a4123ef23ed7d10b348226a\": not found"
Jul 2 00:48:23.980497 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329-rootfs.mount: Deactivated successfully.
Jul 2 00:48:23.980666 systemd[1]: var-lib-kubelet-pods-639369f2\x2d44d1\x2d4d90\x2d9c4a\x2d080c7e8644a7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dm7grm.mount: Deactivated successfully.
Jul 2 00:48:23.980810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329-shm.mount: Deactivated successfully.
Jul 2 00:48:23.980950 systemd[1]: var-lib-kubelet-pods-215566c9\x2d7640\x2d4066\x2da908\x2d52d8ee593c22-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dt2v5x.mount: Deactivated successfully.
Jul 2 00:48:23.981090 systemd[1]: var-lib-kubelet-pods-215566c9\x2d7640\x2d4066\x2da908\x2d52d8ee593c22-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:48:23.981257 systemd[1]: var-lib-kubelet-pods-215566c9\x2d7640\x2d4066\x2da908\x2d52d8ee593c22-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:48:24.300247 kubelet[2779]: I0702 00:48:24.300191 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="215566c9-7640-4066-a908-52d8ee593c22" path="/var/lib/kubelet/pods/215566c9-7640-4066-a908-52d8ee593c22/volumes"
Jul 2 00:48:24.301715 kubelet[2779]: I0702 00:48:24.301663 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="639369f2-44d1-4d90-9c4a-080c7e8644a7" path="/var/lib/kubelet/pods/639369f2-44d1-4d90-9c4a-080c7e8644a7/volumes"
Jul 2 00:48:24.909984 sshd[4348]: pam_unix(sshd:session): session closed for user core
Jul 2 00:48:24.915049 systemd[1]: sshd@23-172.31.20.46:22-139.178.89.65:46992.service: Deactivated successfully.
Jul 2 00:48:24.916844 systemd[1]: session-24.scope: Deactivated successfully.
Jul 2 00:48:24.917243 systemd[1]: session-24.scope: Consumed 1.132s CPU time.
Jul 2 00:48:24.918370 systemd-logind[1738]: Session 24 logged out. Waiting for processes to exit.
Jul 2 00:48:24.920460 systemd-logind[1738]: Removed session 24.
Jul 2 00:48:24.939643 systemd[1]: Started sshd@24-172.31.20.46:22-139.178.89.65:47008.service.
Jul 2 00:48:25.108027 sshd[4513]: Accepted publickey for core from 139.178.89.65 port 47008 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s Jul 2 00:48:25.111249 sshd[4513]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 00:48:25.119254 systemd-logind[1738]: New session 25 of user core. Jul 2 00:48:25.120245 systemd[1]: Started session-25.scope. Jul 2 00:48:25.419064 kubelet[2779]: E0702 00:48:25.419018 2779 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:48:26.504688 sshd[4513]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:26.510215 systemd[1]: sshd@24-172.31.20.46:22-139.178.89.65:47008.service: Deactivated successfully. Jul 2 00:48:26.511582 systemd[1]: session-25.scope: Deactivated successfully. Jul 2 00:48:26.511913 systemd[1]: session-25.scope: Consumed 1.171s CPU time. Jul 2 00:48:26.514019 systemd-logind[1738]: Session 25 logged out. Waiting for processes to exit. Jul 2 00:48:26.515641 systemd-logind[1738]: Removed session 25. Jul 2 00:48:26.536597 systemd[1]: Started sshd@25-172.31.20.46:22-139.178.89.65:47010.service. 
Jul 2 00:48:26.550362 kubelet[2779]: I0702 00:48:26.550287 2779 topology_manager.go:215] "Topology Admit Handler" podUID="89c665ed-ea50-4f7f-ba60-fa899a2c45ca" podNamespace="kube-system" podName="cilium-f97f9"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550383 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="apply-sysctl-overwrites"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550405 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="mount-bpf-fs"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550421 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="639369f2-44d1-4d90-9c4a-080c7e8644a7" containerName="cilium-operator"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550436 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="clean-cilium-state"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550451 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="cilium-agent"
Jul 2 00:48:26.550899 kubelet[2779]: E0702 00:48:26.550469 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="mount-cgroup"
Jul 2 00:48:26.550899 kubelet[2779]: I0702 00:48:26.550514 2779 memory_manager.go:354] "RemoveStaleState removing state" podUID="215566c9-7640-4066-a908-52d8ee593c22" containerName="cilium-agent"
Jul 2 00:48:26.550899 kubelet[2779]: I0702 00:48:26.550530 2779 memory_manager.go:354] "RemoveStaleState removing state" podUID="639369f2-44d1-4d90-9c4a-080c7e8644a7" containerName="cilium-operator"
Jul 2 00:48:26.568578 systemd[1]: Created slice kubepods-burstable-pod89c665ed_ea50_4f7f_ba60_fa899a2c45ca.slice.
Jul 2 00:48:26.663087 kubelet[2779]: I0702 00:48:26.663037 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-ipsec-secrets\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.663345 kubelet[2779]: I0702 00:48:26.663313 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-kernel\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.663543 kubelet[2779]: I0702 00:48:26.663467 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-run\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.663750 kubelet[2779]: I0702 00:48:26.663724 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hostproc\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.663919 kubelet[2779]: I0702 00:48:26.663890 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-clustermesh-secrets\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664107 kubelet[2779]: I0702 00:48:26.664080 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-config-path\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664307 kubelet[2779]: I0702 00:48:26.664280 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-net\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664501 kubelet[2779]: I0702 00:48:26.664471 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kkcwc\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-kube-api-access-kkcwc\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664649 kubelet[2779]: I0702 00:48:26.664623 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cni-path\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664792 kubelet[2779]: I0702 00:48:26.664766 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hubble-tls\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.664954 kubelet[2779]: I0702 00:48:26.664923 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-bpf-maps\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.665109 kubelet[2779]: I0702 00:48:26.665083 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-cgroup\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.665277 kubelet[2779]: I0702 00:48:26.665251 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-lib-modules\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.665444 kubelet[2779]: I0702 00:48:26.665414 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-xtables-lock\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.665608 kubelet[2779]: I0702 00:48:26.665583 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-etc-cni-netd\") pod \"cilium-f97f9\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") " pod="kube-system/cilium-f97f9"
Jul 2 00:48:26.729595 sshd[4523]: Accepted publickey for core from 139.178.89.65 port 47010 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:48:26.732630 sshd[4523]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:48:26.741616 systemd-logind[1738]: New session 26 of user core.
Jul 2 00:48:26.741852 systemd[1]: Started session-26.scope.
Jul 2 00:48:26.875724 env[1747]: time="2024-07-02T00:48:26.875026570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f97f9,Uid:89c665ed-ea50-4f7f-ba60-fa899a2c45ca,Namespace:kube-system,Attempt:0,}"
Jul 2 00:48:26.920851 env[1747]: time="2024-07-02T00:48:26.920736848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 00:48:26.921273 env[1747]: time="2024-07-02T00:48:26.921136241Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 00:48:26.921465 env[1747]: time="2024-07-02T00:48:26.921405839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 00:48:26.921975 env[1747]: time="2024-07-02T00:48:26.921885995Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91 pid=4545 runtime=io.containerd.runc.v2
Jul 2 00:48:26.956890 systemd[1]: Started cri-containerd-9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91.scope.
Jul 2 00:48:27.058675 env[1747]: time="2024-07-02T00:48:27.058604921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-f97f9,Uid:89c665ed-ea50-4f7f-ba60-fa899a2c45ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\""
Jul 2 00:48:27.063970 env[1747]: time="2024-07-02T00:48:27.063910104Z" level=info msg="CreateContainer within sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 00:48:27.094524 sshd[4523]: pam_unix(sshd:session): session closed for user core
Jul 2 00:48:27.100535 env[1747]: time="2024-07-02T00:48:27.100375277Z" level=info msg="CreateContainer within sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\""
Jul 2 00:48:27.102389 systemd[1]: sshd@25-172.31.20.46:22-139.178.89.65:47010.service: Deactivated successfully.
Jul 2 00:48:27.103678 systemd[1]: session-26.scope: Deactivated successfully.
Jul 2 00:48:27.105046 systemd-logind[1738]: Session 26 logged out. Waiting for processes to exit.
Jul 2 00:48:27.108743 systemd-logind[1738]: Removed session 26.
Jul 2 00:48:27.109793 env[1747]: time="2024-07-02T00:48:27.109742461Z" level=info msg="StartContainer for \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\""
Jul 2 00:48:27.124019 systemd[1]: Started sshd@26-172.31.20.46:22-139.178.89.65:47020.service.
Jul 2 00:48:27.159481 systemd[1]: Started cri-containerd-9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc.scope.
Jul 2 00:48:27.182308 systemd[1]: cri-containerd-9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc.scope: Deactivated successfully.
Jul 2 00:48:27.212927 env[1747]: time="2024-07-02T00:48:27.212840267Z" level=info msg="shim disconnected" id=9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc
Jul 2 00:48:27.213317 env[1747]: time="2024-07-02T00:48:27.212925109Z" level=warning msg="cleaning up after shim disconnected" id=9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc namespace=k8s.io
Jul 2 00:48:27.213317 env[1747]: time="2024-07-02T00:48:27.212949733Z" level=info msg="cleaning up dead shim"
Jul 2 00:48:27.228590 env[1747]: time="2024-07-02T00:48:27.228499548Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4605 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T00:48:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Jul 2 00:48:27.229284 env[1747]: time="2024-07-02T00:48:27.229103498Z" level=error msg="copy shim log" error="read /proc/self/fd/41: file already closed"
Jul 2 00:48:27.229606 env[1747]: time="2024-07-02T00:48:27.229549885Z" level=error msg="Failed to pipe stderr of container \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\"" error="reading from a closed fifo"
Jul 2 00:48:27.229732 env[1747]: time="2024-07-02T00:48:27.229544989Z" level=error msg="Failed to pipe stdout of container \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\"" error="reading from a closed fifo"
Jul 2 00:48:27.232512 env[1747]: time="2024-07-02T00:48:27.232416333Z" level=error msg="StartContainer for \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Jul 2 00:48:27.233211 kubelet[2779]: E0702 00:48:27.232842 2779 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc"
Jul 2 00:48:27.233742 kubelet[2779]: E0702 00:48:27.233616 2779 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Jul 2 00:48:27.233742 kubelet[2779]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Jul 2 00:48:27.233742 kubelet[2779]: rm /hostbin/cilium-mount
Jul 2 00:48:27.233966 kubelet[2779]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-kkcwc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-f97f9_kube-system(89c665ed-ea50-4f7f-ba60-fa899a2c45ca): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Jul 2 00:48:27.233966 kubelet[2779]: E0702 00:48:27.233680 2779 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-f97f9" podUID="89c665ed-ea50-4f7f-ba60-fa899a2c45ca"
Jul 2 00:48:27.324680 sshd[4587]: Accepted publickey for core from 139.178.89.65 port 47020 ssh2: RSA SHA256:8y6JErBds/WgSuzw1b/2wKJnltsiajeNUW/adFCuF/s
Jul 2 00:48:27.328002 sshd[4587]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 00:48:27.338202 systemd[1]: Started session-27.scope.
Jul 2 00:48:27.339811 systemd-logind[1738]: New session 27 of user core.
Jul 2 00:48:27.727752 env[1747]: time="2024-07-02T00:48:27.727443316Z" level=info msg="StopPodSandbox for \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\""
Jul 2 00:48:27.727752 env[1747]: time="2024-07-02T00:48:27.727542534Z" level=info msg="Container to stop \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 00:48:27.743282 systemd[1]: cri-containerd-9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91.scope: Deactivated successfully.
Jul 2 00:48:27.784722 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91-shm.mount: Deactivated successfully.
Jul 2 00:48:27.793471 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91-rootfs.mount: Deactivated successfully.
Jul 2 00:48:27.815157 env[1747]: time="2024-07-02T00:48:27.815084813Z" level=info msg="shim disconnected" id=9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91
Jul 2 00:48:27.815157 env[1747]: time="2024-07-02T00:48:27.815154931Z" level=warning msg="cleaning up after shim disconnected" id=9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91 namespace=k8s.io
Jul 2 00:48:27.815535 env[1747]: time="2024-07-02T00:48:27.815208140Z" level=info msg="cleaning up dead shim"
Jul 2 00:48:27.830385 env[1747]: time="2024-07-02T00:48:27.830299040Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4643 runtime=io.containerd.runc.v2\n"
Jul 2 00:48:27.831107 env[1747]: time="2024-07-02T00:48:27.830950236Z" level=info msg="TearDown network for sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" successfully"
Jul 2 00:48:27.831107 env[1747]: time="2024-07-02T00:48:27.831012589Z" level=info msg="StopPodSandbox for \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" returns successfully"
Jul 2 00:48:27.877998 kubelet[2779]: I0702 00:48:27.877937 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-ipsec-secrets\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878020 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-config-path\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878058 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-bpf-maps\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878096 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-run\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878134 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hubble-tls\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878197 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hostproc\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878251 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kkcwc\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-kube-api-access-kkcwc\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878288 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-lib-modules\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878336 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-kernel\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878376 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-clustermesh-secrets\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878410 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cni-path\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878441 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-xtables-lock\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878478 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-cgroup\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878511 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-etc-cni-netd\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878548 2779 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-net\") pod \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\" (UID: \"89c665ed-ea50-4f7f-ba60-fa899a2c45ca\") "
Jul 2 00:48:27.878753 kubelet[2779]: I0702 00:48:27.878652 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.880509 kubelet[2779]: I0702 00:48:27.880410 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.880709 kubelet[2779]: I0702 00:48:27.880529 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.887347 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.887391 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.888516 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cni-path" (OuterVolumeSpecName: "cni-path") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.888555 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.888588 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hostproc" (OuterVolumeSpecName: "hostproc") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.888591 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.888845 kubelet[2779]: I0702 00:48:27.888622 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 2 00:48:27.890390 kubelet[2779]: I0702 00:48:27.890331 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 2 00:48:27.894737 systemd[1]: var-lib-kubelet-pods-89c665ed\x2dea50\x2d4f7f\x2dba60\x2dfa899a2c45ca-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:48:27.912521 kubelet[2779]: I0702 00:48:27.906434 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:48:27.912521 kubelet[2779]: I0702 00:48:27.907264 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 2 00:48:27.912583 systemd[1]: var-lib-kubelet-pods-89c665ed\x2dea50\x2d4f7f\x2dba60\x2dfa899a2c45ca-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkkcwc.mount: Deactivated successfully.
Jul 2 00:48:27.912767 systemd[1]: var-lib-kubelet-pods-89c665ed\x2dea50\x2d4f7f\x2dba60\x2dfa899a2c45ca-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 2 00:48:27.912905 systemd[1]: var-lib-kubelet-pods-89c665ed\x2dea50\x2d4f7f\x2dba60\x2dfa899a2c45ca-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 2 00:48:27.913298 kubelet[2779]: I0702 00:48:27.913243 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-kube-api-access-kkcwc" (OuterVolumeSpecName: "kube-api-access-kkcwc") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "kube-api-access-kkcwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:48:27.920478 kubelet[2779]: I0702 00:48:27.920422 2779 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "89c665ed-ea50-4f7f-ba60-fa899a2c45ca" (UID: "89c665ed-ea50-4f7f-ba60-fa899a2c45ca"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 2 00:48:27.979095 kubelet[2779]: I0702 00:48:27.978890 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-run\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.979095 kubelet[2779]: I0702 00:48:27.979003 2779 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hubble-tls\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.979095 kubelet[2779]: I0702 00:48:27.979042 2779 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-hostproc\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.980248 kubelet[2779]: I0702 00:48:27.979067 2779 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-kkcwc\" (UniqueName: \"kubernetes.io/projected/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-kube-api-access-kkcwc\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.980453 kubelet[2779]: I0702 00:48:27.980428 2779 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-lib-modules\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.980719 kubelet[2779]: I0702 00:48:27.980693 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-kernel\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.980883 kubelet[2779]: I0702 00:48:27.980861 2779 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-clustermesh-secrets\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981032 kubelet[2779]: I0702 00:48:27.981011 2779 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cni-path\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981206 kubelet[2779]: I0702 00:48:27.981151 2779 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-xtables-lock\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981344 kubelet[2779]: I0702 00:48:27.981322 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-cgroup\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981486 kubelet[2779]: I0702 00:48:27.981465 2779 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-etc-cni-netd\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981633 kubelet[2779]: I0702 00:48:27.981611 2779 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-host-proc-sys-net\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981768 kubelet[2779]: I0702 00:48:27.981746 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-ipsec-secrets\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.981906 kubelet[2779]: I0702 00:48:27.981885 2779 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-cilium-config-path\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:27.982039 kubelet[2779]: I0702 00:48:27.982018 2779 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/89c665ed-ea50-4f7f-ba60-fa899a2c45ca-bpf-maps\") on node \"ip-172-31-20-46\" DevicePath \"\""
Jul 2 00:48:28.308871 systemd[1]: Removed slice kubepods-burstable-pod89c665ed_ea50_4f7f_ba60_fa899a2c45ca.slice.
Jul 2 00:48:28.731382 kubelet[2779]: I0702 00:48:28.731344 2779 scope.go:117] "RemoveContainer" containerID="9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc"
Jul 2 00:48:28.736408 env[1747]: time="2024-07-02T00:48:28.736291886Z" level=info msg="RemoveContainer for \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\""
Jul 2 00:48:28.742586 env[1747]: time="2024-07-02T00:48:28.742507508Z" level=info msg="RemoveContainer for \"9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc\" returns successfully"
Jul 2 00:48:28.796432 kubelet[2779]: I0702 00:48:28.796343 2779 topology_manager.go:215] "Topology Admit Handler" podUID="c49b1b7f-afad-46a2-bfbd-419ce0b94f0a" podNamespace="kube-system" podName="cilium-bkt6h"
Jul 2 00:48:28.796638 kubelet[2779]: E0702 00:48:28.796489 2779 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="89c665ed-ea50-4f7f-ba60-fa899a2c45ca" containerName="mount-cgroup"
Jul 2 00:48:28.796638 kubelet[2779]: I0702 00:48:28.796561 2779 memory_manager.go:354] "RemoveStaleState removing state" podUID="89c665ed-ea50-4f7f-ba60-fa899a2c45ca" containerName="mount-cgroup"
Jul 2 00:48:28.807765 systemd[1]: Created slice
kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice. Jul 2 00:48:28.887743 kubelet[2779]: I0702 00:48:28.887700 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-cilium-cgroup\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.888550 kubelet[2779]: I0702 00:48:28.888501 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-lib-modules\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.888745 kubelet[2779]: I0702 00:48:28.888719 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6brt\" (UniqueName: \"kubernetes.io/projected/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-kube-api-access-t6brt\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.888907 kubelet[2779]: I0702 00:48:28.888880 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-cilium-run\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889072 kubelet[2779]: I0702 00:48:28.889046 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-bpf-maps\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889291 kubelet[2779]: I0702 00:48:28.889264 2779 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-host-proc-sys-kernel\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889468 kubelet[2779]: I0702 00:48:28.889435 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-clustermesh-secrets\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889619 kubelet[2779]: I0702 00:48:28.889594 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-cilium-ipsec-secrets\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889767 kubelet[2779]: I0702 00:48:28.889742 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-hubble-tls\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.889934 kubelet[2779]: I0702 00:48:28.889905 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-cni-path\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.890154 kubelet[2779]: I0702 00:48:28.890125 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-etc-cni-netd\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.890331 kubelet[2779]: I0702 00:48:28.890306 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-xtables-lock\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.890474 kubelet[2779]: I0702 00:48:28.890449 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-hostproc\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.890632 kubelet[2779]: I0702 00:48:28.890607 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-cilium-config-path\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:28.890798 kubelet[2779]: I0702 00:48:28.890772 2779 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c49b1b7f-afad-46a2-bfbd-419ce0b94f0a-host-proc-sys-net\") pod \"cilium-bkt6h\" (UID: \"c49b1b7f-afad-46a2-bfbd-419ce0b94f0a\") " pod="kube-system/cilium-bkt6h" Jul 2 00:48:29.114214 env[1747]: time="2024-07-02T00:48:29.114029865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkt6h,Uid:c49b1b7f-afad-46a2-bfbd-419ce0b94f0a,Namespace:kube-system,Attempt:0,}" Jul 2 00:48:29.144929 env[1747]: time="2024-07-02T00:48:29.144569145Z" level=info 
msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 00:48:29.144929 env[1747]: time="2024-07-02T00:48:29.144651923Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 00:48:29.144929 env[1747]: time="2024-07-02T00:48:29.144680003Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 00:48:29.145485 env[1747]: time="2024-07-02T00:48:29.145378288Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69 pid=4672 runtime=io.containerd.runc.v2 Jul 2 00:48:29.175626 systemd[1]: Started cri-containerd-6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69.scope. Jul 2 00:48:29.232594 env[1747]: time="2024-07-02T00:48:29.232476613Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bkt6h,Uid:c49b1b7f-afad-46a2-bfbd-419ce0b94f0a,Namespace:kube-system,Attempt:0,} returns sandbox id \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\"" Jul 2 00:48:29.243217 env[1747]: time="2024-07-02T00:48:29.243122312Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 00:48:29.272711 env[1747]: time="2024-07-02T00:48:29.272645767Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380\"" Jul 2 00:48:29.274071 env[1747]: time="2024-07-02T00:48:29.273993232Z" level=info msg="StartContainer for 
\"73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380\"" Jul 2 00:48:29.308267 systemd[1]: Started cri-containerd-73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380.scope. Jul 2 00:48:29.375348 env[1747]: time="2024-07-02T00:48:29.375284698Z" level=info msg="StartContainer for \"73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380\" returns successfully" Jul 2 00:48:29.390591 systemd[1]: cri-containerd-73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380.scope: Deactivated successfully. Jul 2 00:48:29.450442 env[1747]: time="2024-07-02T00:48:29.450372518Z" level=info msg="shim disconnected" id=73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380 Jul 2 00:48:29.450727 env[1747]: time="2024-07-02T00:48:29.450450004Z" level=warning msg="cleaning up after shim disconnected" id=73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380 namespace=k8s.io Jul 2 00:48:29.450727 env[1747]: time="2024-07-02T00:48:29.450473537Z" level=info msg="cleaning up dead shim" Jul 2 00:48:29.467472 env[1747]: time="2024-07-02T00:48:29.467406237Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4755 runtime=io.containerd.runc.v2\n" Jul 2 00:48:29.746700 env[1747]: time="2024-07-02T00:48:29.746560814Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 00:48:29.774753 env[1747]: time="2024-07-02T00:48:29.774689571Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1\"" Jul 2 00:48:29.776798 env[1747]: time="2024-07-02T00:48:29.776733761Z" level=info msg="StartContainer for 
\"1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1\"" Jul 2 00:48:29.821953 systemd[1]: Started cri-containerd-1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1.scope. Jul 2 00:48:29.898884 env[1747]: time="2024-07-02T00:48:29.898736987Z" level=info msg="StartContainer for \"1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1\" returns successfully" Jul 2 00:48:29.934504 systemd[1]: cri-containerd-1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1.scope: Deactivated successfully. Jul 2 00:48:30.003492 env[1747]: time="2024-07-02T00:48:30.003341675Z" level=info msg="shim disconnected" id=1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1 Jul 2 00:48:30.004263 env[1747]: time="2024-07-02T00:48:30.004156675Z" level=warning msg="cleaning up after shim disconnected" id=1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1 namespace=k8s.io Jul 2 00:48:30.004447 env[1747]: time="2024-07-02T00:48:30.004418809Z" level=info msg="cleaning up dead shim" Jul 2 00:48:30.018604 env[1747]: time="2024-07-02T00:48:30.018547693Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4820 runtime=io.containerd.runc.v2\n" Jul 2 00:48:30.216995 env[1747]: time="2024-07-02T00:48:30.216943771Z" level=info msg="StopPodSandbox for \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\"" Jul 2 00:48:30.217399 env[1747]: time="2024-07-02T00:48:30.217331189Z" level=info msg="TearDown network for sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" successfully" Jul 2 00:48:30.217543 env[1747]: time="2024-07-02T00:48:30.217510065Z" level=info msg="StopPodSandbox for \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" returns successfully" Jul 2 00:48:30.218338 env[1747]: time="2024-07-02T00:48:30.218285608Z" level=info msg="RemovePodSandbox for 
\"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\"" Jul 2 00:48:30.218484 env[1747]: time="2024-07-02T00:48:30.218346642Z" level=info msg="Forcibly stopping sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\"" Jul 2 00:48:30.218547 env[1747]: time="2024-07-02T00:48:30.218479077Z" level=info msg="TearDown network for sandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" successfully" Jul 2 00:48:30.224419 env[1747]: time="2024-07-02T00:48:30.224357961Z" level=info msg="RemovePodSandbox \"9c5fca3815ec45781333fb7b52dbada329d4238424fe5720dd2ea44786a6bc91\" returns successfully" Jul 2 00:48:30.225251 env[1747]: time="2024-07-02T00:48:30.225191154Z" level=info msg="StopPodSandbox for \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\"" Jul 2 00:48:30.225464 env[1747]: time="2024-07-02T00:48:30.225331989Z" level=info msg="TearDown network for sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" successfully" Jul 2 00:48:30.225573 env[1747]: time="2024-07-02T00:48:30.225458376Z" level=info msg="StopPodSandbox for \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" returns successfully" Jul 2 00:48:30.226238 env[1747]: time="2024-07-02T00:48:30.226142117Z" level=info msg="RemovePodSandbox for \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\"" Jul 2 00:48:30.226401 env[1747]: time="2024-07-02T00:48:30.226242128Z" level=info msg="Forcibly stopping sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\"" Jul 2 00:48:30.226401 env[1747]: time="2024-07-02T00:48:30.226365479Z" level=info msg="TearDown network for sandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" successfully" Jul 2 00:48:30.231817 env[1747]: time="2024-07-02T00:48:30.231740015Z" level=info msg="RemovePodSandbox \"c3160aabe65cceab6dc7b6fac7be461c6a71988f50c8a5d50ace2cfadf416329\" returns successfully" Jul 2 00:48:30.232912 
env[1747]: time="2024-07-02T00:48:30.232868223Z" level=info msg="StopPodSandbox for \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\"" Jul 2 00:48:30.233253 env[1747]: time="2024-07-02T00:48:30.233186482Z" level=info msg="TearDown network for sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" successfully" Jul 2 00:48:30.233387 env[1747]: time="2024-07-02T00:48:30.233354439Z" level=info msg="StopPodSandbox for \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" returns successfully" Jul 2 00:48:30.234043 env[1747]: time="2024-07-02T00:48:30.233989446Z" level=info msg="RemovePodSandbox for \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\"" Jul 2 00:48:30.234191 env[1747]: time="2024-07-02T00:48:30.234045452Z" level=info msg="Forcibly stopping sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\"" Jul 2 00:48:30.234278 env[1747]: time="2024-07-02T00:48:30.234186347Z" level=info msg="TearDown network for sandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" successfully" Jul 2 00:48:30.240103 env[1747]: time="2024-07-02T00:48:30.239986582Z" level=info msg="RemovePodSandbox \"77a807afae0ce71c9e6f11909a2a0d08fcfdc8e1f15632874a2ec1d47e1e6165\" returns successfully" Jul 2 00:48:30.301584 kubelet[2779]: I0702 00:48:30.301455 2779 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="89c665ed-ea50-4f7f-ba60-fa899a2c45ca" path="/var/lib/kubelet/pods/89c665ed-ea50-4f7f-ba60-fa899a2c45ca/volumes" Jul 2 00:48:30.321754 kubelet[2779]: W0702 00:48:30.321665 2779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89c665ed_ea50_4f7f_ba60_fa899a2c45ca.slice/cri-containerd-9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc.scope WatchSource:0}: container "9cff684a83ffda31b7de216ea6f23bd4d0fc4aef6c712b4dcad39001ccb2bedc" in namespace "k8s.io": not found 
Jul 2 00:48:30.420929 kubelet[2779]: E0702 00:48:30.420821 2779 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 00:48:30.753224 env[1747]: time="2024-07-02T00:48:30.752438663Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 2 00:48:30.785878 env[1747]: time="2024-07-02T00:48:30.785773070Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42\"" Jul 2 00:48:30.790294 env[1747]: time="2024-07-02T00:48:30.790221772Z" level=info msg="StartContainer for \"61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42\"" Jul 2 00:48:30.845940 systemd[1]: Started cri-containerd-61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42.scope. Jul 2 00:48:30.925489 env[1747]: time="2024-07-02T00:48:30.925425784Z" level=info msg="StartContainer for \"61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42\" returns successfully" Jul 2 00:48:30.932197 systemd[1]: cri-containerd-61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42.scope: Deactivated successfully. 
Jul 2 00:48:30.985973 env[1747]: time="2024-07-02T00:48:30.985896027Z" level=info msg="shim disconnected" id=61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42 Jul 2 00:48:30.985973 env[1747]: time="2024-07-02T00:48:30.985965737Z" level=warning msg="cleaning up after shim disconnected" id=61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42 namespace=k8s.io Jul 2 00:48:30.986427 env[1747]: time="2024-07-02T00:48:30.985988777Z" level=info msg="cleaning up dead shim" Jul 2 00:48:31.000657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42-rootfs.mount: Deactivated successfully. Jul 2 00:48:31.004206 env[1747]: time="2024-07-02T00:48:31.004038062Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4880 runtime=io.containerd.runc.v2\n" Jul 2 00:48:31.758202 env[1747]: time="2024-07-02T00:48:31.758097012Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 2 00:48:31.785532 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540393985.mount: Deactivated successfully. Jul 2 00:48:31.792380 env[1747]: time="2024-07-02T00:48:31.792315910Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2\"" Jul 2 00:48:31.796234 env[1747]: time="2024-07-02T00:48:31.793762893Z" level=info msg="StartContainer for \"7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2\"" Jul 2 00:48:31.850304 systemd[1]: Started cri-containerd-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2.scope. 
Jul 2 00:48:31.923871 systemd[1]: cri-containerd-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2.scope: Deactivated successfully. Jul 2 00:48:31.927931 env[1747]: time="2024-07-02T00:48:31.927807949Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice/cri-containerd-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2.scope/memory.events\": no such file or directory" Jul 2 00:48:31.932384 env[1747]: time="2024-07-02T00:48:31.932299732Z" level=info msg="StartContainer for \"7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2\" returns successfully" Jul 2 00:48:31.976253 env[1747]: time="2024-07-02T00:48:31.976128672Z" level=info msg="shim disconnected" id=7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2 Jul 2 00:48:31.976253 env[1747]: time="2024-07-02T00:48:31.976245987Z" level=warning msg="cleaning up after shim disconnected" id=7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2 namespace=k8s.io Jul 2 00:48:31.976597 env[1747]: time="2024-07-02T00:48:31.976269568Z" level=info msg="cleaning up dead shim" Jul 2 00:48:31.991720 env[1747]: time="2024-07-02T00:48:31.991644625Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4937 runtime=io.containerd.runc.v2\n" Jul 2 00:48:32.000721 systemd[1]: run-containerd-runc-k8s.io-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2-runc.ZGKlsL.mount: Deactivated successfully. Jul 2 00:48:32.000901 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2-rootfs.mount: Deactivated successfully. 
Jul 2 00:48:32.640323 kubelet[2779]: I0702 00:48:32.640264 2779 setters.go:580] "Node became not ready" node="ip-172-31-20-46" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T00:48:32Z","lastTransitionTime":"2024-07-02T00:48:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 2 00:48:32.763590 env[1747]: time="2024-07-02T00:48:32.763506070Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 2 00:48:32.793249 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903285752.mount: Deactivated successfully. Jul 2 00:48:32.806710 env[1747]: time="2024-07-02T00:48:32.806616934Z" level=info msg="CreateContainer within sandbox \"6520d5c20c0aaddc08e220c405276e862374739a1e50e11340c77aa3ee251a69\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0\"" Jul 2 00:48:32.808115 env[1747]: time="2024-07-02T00:48:32.808062070Z" level=info msg="StartContainer for \"261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0\"" Jul 2 00:48:32.854714 systemd[1]: Started cri-containerd-261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0.scope. 
Jul 2 00:48:32.936481 env[1747]: time="2024-07-02T00:48:32.936341336Z" level=info msg="StartContainer for \"261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0\" returns successfully" Jul 2 00:48:33.451512 kubelet[2779]: W0702 00:48:33.451460 2779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice/cri-containerd-73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380.scope WatchSource:0}: task 73de1374bb0bf86d0806f6a71be7e61edd635fba54d9463979f5b1503664b380 not found: not found Jul 2 00:48:33.835410 kubelet[2779]: I0702 00:48:33.835011 2779 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bkt6h" podStartSLOduration=5.834986957 podStartE2EDuration="5.834986957s" podCreationTimestamp="2024-07-02 00:48:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 00:48:33.834309192 +0000 UTC m=+123.940523298" watchObservedRunningTime="2024-07-02 00:48:33.834986957 +0000 UTC m=+123.941201087" Jul 2 00:48:33.977255 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) Jul 2 00:48:36.091991 systemd[1]: run-containerd-runc-k8s.io-261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0-runc.WQbXSB.mount: Deactivated successfully. Jul 2 00:48:36.565472 kubelet[2779]: W0702 00:48:36.565420 2779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice/cri-containerd-1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1.scope WatchSource:0}: task 1d702be8fb72d463fbc3ab4906fb4ecaa0199b64a9faa77bd9933d747e5cb4d1 not found: not found Jul 2 00:48:37.966581 (udev-worker)[5483]: Network interface NamePolicy= disabled on kernel command line. 
Jul 2 00:48:37.968993 (udev-worker)[5484]: Network interface NamePolicy= disabled on kernel command line. Jul 2 00:48:37.976135 systemd-networkd[1462]: lxc_health: Link UP Jul 2 00:48:38.008018 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Jul 2 00:48:38.006926 systemd-networkd[1462]: lxc_health: Gained carrier Jul 2 00:48:38.376890 systemd[1]: run-containerd-runc-k8s.io-261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0-runc.KbX60E.mount: Deactivated successfully. Jul 2 00:48:39.519442 systemd-networkd[1462]: lxc_health: Gained IPv6LL Jul 2 00:48:39.675709 kubelet[2779]: W0702 00:48:39.675656 2779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice/cri-containerd-61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42.scope WatchSource:0}: task 61cd3014952594af176dc797f9186fc0180c4ea695d23c77b91f82dc934ced42 not found: not found Jul 2 00:48:40.710319 systemd[1]: run-containerd-runc-k8s.io-261687291dd22e04fd906bf63ce3af266eb7277d07559b0fa387e4657e383cd0-runc.JAM2F0.mount: Deactivated successfully. Jul 2 00:48:42.793914 kubelet[2779]: W0702 00:48:42.793830 2779 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc49b1b7f_afad_46a2_bfbd_419ce0b94f0a.slice/cri-containerd-7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2.scope WatchSource:0}: task 7d66375947083ca057d7a7917685ab9ffd3b87631d74ef942cd5adcdbb8f5ef2 not found: not found Jul 2 00:48:43.189068 sshd[4587]: pam_unix(sshd:session): session closed for user core Jul 2 00:48:43.195286 systemd[1]: session-27.scope: Deactivated successfully. Jul 2 00:48:43.196603 systemd[1]: sshd@26-172.31.20.46:22-139.178.89.65:47020.service: Deactivated successfully. Jul 2 00:48:43.198456 systemd-logind[1738]: Session 27 logged out. Waiting for processes to exit. 
Jul 2 00:48:43.201621 systemd-logind[1738]: Removed session 27. Jul 2 00:48:57.082743 systemd[1]: cri-containerd-fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81.scope: Deactivated successfully. Jul 2 00:48:57.083362 systemd[1]: cri-containerd-fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81.scope: Consumed 4.870s CPU time. Jul 2 00:48:57.120800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81-rootfs.mount: Deactivated successfully. Jul 2 00:48:57.142926 env[1747]: time="2024-07-02T00:48:57.142826010Z" level=info msg="shim disconnected" id=fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81 Jul 2 00:48:57.142926 env[1747]: time="2024-07-02T00:48:57.142913228Z" level=warning msg="cleaning up after shim disconnected" id=fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81 namespace=k8s.io Jul 2 00:48:57.143714 env[1747]: time="2024-07-02T00:48:57.142936905Z" level=info msg="cleaning up dead shim" Jul 2 00:48:57.157748 env[1747]: time="2024-07-02T00:48:57.157666239Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:48:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5598 runtime=io.containerd.runc.v2\n" Jul 2 00:48:57.826440 kubelet[2779]: I0702 00:48:57.826393 2779 scope.go:117] "RemoveContainer" containerID="fe22b334435bbac201434a12eff23d8a1392417198bd4620d85a2330c8265e81" Jul 2 00:48:57.830639 env[1747]: time="2024-07-02T00:48:57.830585695Z" level=info msg="CreateContainer within sandbox \"2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 00:48:57.853089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount442113438.mount: Deactivated successfully. 
Jul 2 00:48:57.863094 env[1747]: time="2024-07-02T00:48:57.862981663Z" level=info msg="CreateContainer within sandbox \"2f673e0ae2b6904e3b92e319ff2780861418b8a8f22003c81dcf50e13730c2e2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"ecc76fee51ea9b0c0384837ec282aa3b1a8b4f0ba54dc339bb36cc297292c07a\""
Jul 2 00:48:57.864140 env[1747]: time="2024-07-02T00:48:57.864089067Z" level=info msg="StartContainer for \"ecc76fee51ea9b0c0384837ec282aa3b1a8b4f0ba54dc339bb36cc297292c07a\""
Jul 2 00:48:57.902139 systemd[1]: Started cri-containerd-ecc76fee51ea9b0c0384837ec282aa3b1a8b4f0ba54dc339bb36cc297292c07a.scope.
Jul 2 00:48:57.988261 env[1747]: time="2024-07-02T00:48:57.988161312Z" level=info msg="StartContainer for \"ecc76fee51ea9b0c0384837ec282aa3b1a8b4f0ba54dc339bb36cc297292c07a\" returns successfully"
Jul 2 00:49:02.586829 systemd[1]: cri-containerd-77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f.scope: Deactivated successfully.
Jul 2 00:49:02.587402 systemd[1]: cri-containerd-77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f.scope: Consumed 3.480s CPU time.
Jul 2 00:49:02.629124 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f-rootfs.mount: Deactivated successfully.
Jul 2 00:49:02.646355 env[1747]: time="2024-07-02T00:49:02.646270716Z" level=info msg="shim disconnected" id=77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f
Jul 2 00:49:02.646355 env[1747]: time="2024-07-02T00:49:02.646349667Z" level=warning msg="cleaning up after shim disconnected" id=77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f namespace=k8s.io
Jul 2 00:49:02.647728 env[1747]: time="2024-07-02T00:49:02.646374807Z" level=info msg="cleaning up dead shim"
Jul 2 00:49:02.662233 env[1747]: time="2024-07-02T00:49:02.662124850Z" level=warning msg="cleanup warnings time=\"2024-07-02T00:49:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5661 runtime=io.containerd.runc.v2\n"
Jul 2 00:49:02.853047 kubelet[2779]: I0702 00:49:02.852231 2779 scope.go:117] "RemoveContainer" containerID="77773d50cd558e6b180260e393614686b69a98a36ead9113eae8addaa081332f"
Jul 2 00:49:02.856426 env[1747]: time="2024-07-02T00:49:02.856369045Z" level=info msg="CreateContainer within sandbox \"52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}"
Jul 2 00:49:02.886997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845705142.mount: Deactivated successfully.
Jul 2 00:49:02.906601 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3622121881.mount: Deactivated successfully.
Jul 2 00:49:02.910212 env[1747]: time="2024-07-02T00:49:02.910061045Z" level=info msg="CreateContainer within sandbox \"52e0c616c6e467133ee483cbea48ad31f8df0b7a57d85e52f9c7a18ca0e09285\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"1fed527849f7e820e08694090f9accc7e16c33b4ccb44e8bcca474fc3d1134f8\""
Jul 2 00:49:02.911131 env[1747]: time="2024-07-02T00:49:02.911087976Z" level=info msg="StartContainer for \"1fed527849f7e820e08694090f9accc7e16c33b4ccb44e8bcca474fc3d1134f8\""
Jul 2 00:49:02.942658 systemd[1]: Started cri-containerd-1fed527849f7e820e08694090f9accc7e16c33b4ccb44e8bcca474fc3d1134f8.scope.
Jul 2 00:49:03.024547 env[1747]: time="2024-07-02T00:49:03.024482571Z" level=info msg="StartContainer for \"1fed527849f7e820e08694090f9accc7e16c33b4ccb44e8bcca474fc3d1134f8\" returns successfully"
Jul 2 00:49:03.561608 kubelet[2779]: E0702 00:49:03.561538 2779 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jul 2 00:49:13.562912 kubelet[2779]: E0702 00:49:13.562239 2779 controller.go:195] "Failed to update lease" err="Put \"https://172.31.20.46:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-20-46?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"