Feb 9 09:47:00.942606 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 9 09:47:00.942642 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Feb 9 08:56:26 -00 2024
Feb 9 09:47:00.942664 kernel: efi: EFI v2.70 by EDK II
Feb 9 09:47:00.942679 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 9 09:47:00.942693 kernel: ACPI: Early table checksum verification disabled
Feb 9 09:47:00.942706 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 9 09:47:00.942722 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 9 09:47:00.942736 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 9 09:47:00.942750 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 9 09:47:00.942764 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 9 09:47:00.942781 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 9 09:47:00.942795 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 9 09:47:00.942809 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 9 09:47:00.942823 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 9 09:47:00.942839 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 9 09:47:00.942858 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 9 09:47:00.942872 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 9 09:47:00.942887 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 9 09:47:00.942901 kernel: printk: bootconsole [uart0] enabled
Feb 9 09:47:00.942915 kernel: NUMA: Failed to initialise from firmware
Feb 9 09:47:00.942930 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:47:00.942945 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 9 09:47:00.942959 kernel: Zone ranges:
Feb 9 09:47:00.945050 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Feb 9 09:47:00.945075 kernel: DMA32 empty
Feb 9 09:47:00.945091 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 9 09:47:00.945113 kernel: Movable zone start for each node
Feb 9 09:47:00.945128 kernel: Early memory node ranges
Feb 9 09:47:00.945143 kernel: node 0: [mem 0x0000000040000000-0x00000000786effff]
Feb 9 09:47:00.945157 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 9 09:47:00.945172 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 9 09:47:00.945186 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 9 09:47:00.945201 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 9 09:47:00.945215 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 9 09:47:00.945230 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 9 09:47:00.945244 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 9 09:47:00.945259 kernel: psci: probing for conduit method from ACPI.
Feb 9 09:47:00.945273 kernel: psci: PSCIv1.0 detected in firmware.
Feb 9 09:47:00.945292 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 9 09:47:00.945306 kernel: psci: Trusted OS migration not required
Feb 9 09:47:00.945327 kernel: psci: SMC Calling Convention v1.1
Feb 9 09:47:00.945343 kernel: ACPI: SRAT not present
Feb 9 09:47:00.945359 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 9 09:47:00.945378 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 9 09:47:00.945394 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 9 09:47:00.945409 kernel: Detected PIPT I-cache on CPU0
Feb 9 09:47:00.945424 kernel: CPU features: detected: GIC system register CPU interface
Feb 9 09:47:00.945439 kernel: CPU features: detected: Spectre-v2
Feb 9 09:47:00.945454 kernel: CPU features: detected: Spectre-v3a
Feb 9 09:47:00.945469 kernel: CPU features: detected: Spectre-BHB
Feb 9 09:47:00.945483 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 9 09:47:00.945499 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 9 09:47:00.945514 kernel: CPU features: detected: ARM erratum 1742098
Feb 9 09:47:00.945529 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 9 09:47:00.945548 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Feb 9 09:47:00.945563 kernel: Policy zone: Normal
Feb 9 09:47:00.945581 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:47:00.945597 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 09:47:00.945612 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 09:47:00.945627 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 9 09:47:00.945643 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 09:47:00.945658 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 9 09:47:00.945674 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 9 09:47:00.945689 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 09:47:00.945708 kernel: trace event string verifier disabled
Feb 9 09:47:00.945724 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 9 09:47:00.945758 kernel: rcu: RCU event tracing is enabled.
Feb 9 09:47:00.945776 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 09:47:00.945792 kernel: Trampoline variant of Tasks RCU enabled.
Feb 9 09:47:00.945808 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 09:47:00.945824 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 09:47:00.945839 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 09:47:00.945854 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 9 09:47:00.945869 kernel: GICv3: 96 SPIs implemented
Feb 9 09:47:00.945884 kernel: GICv3: 0 Extended SPIs implemented
Feb 9 09:47:00.945898 kernel: GICv3: Distributor has no Range Selector support
Feb 9 09:47:00.945919 kernel: Root IRQ handler: gic_handle_irq
Feb 9 09:47:00.945934 kernel: GICv3: 16 PPIs implemented
Feb 9 09:47:00.945949 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 9 09:47:00.945964 kernel: ACPI: SRAT not present
Feb 9 09:47:00.946000 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 9 09:47:00.946018 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 9 09:47:00.946033 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 9 09:47:00.946049 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 9 09:47:00.946064 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 9 09:47:00.946079 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 9 09:47:00.946094 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 9 09:47:00.946115 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 9 09:47:00.946131 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 9 09:47:00.946146 kernel: Console: colour dummy device 80x25
Feb 9 09:47:00.946162 kernel: printk: console [tty1] enabled
Feb 9 09:47:00.946178 kernel: ACPI: Core revision 20210730
Feb 9 09:47:00.946194 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 9 09:47:00.946210 kernel: pid_max: default: 32768 minimum: 301
Feb 9 09:47:00.946225 kernel: LSM: Security Framework initializing
Feb 9 09:47:00.946241 kernel: SELinux: Initializing.
Feb 9 09:47:00.946256 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:47:00.946277 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 9 09:47:00.946293 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 09:47:00.946308 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 9 09:47:00.946323 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 9 09:47:00.946339 kernel: Remapping and enabling EFI services.
Feb 9 09:47:00.946354 kernel: smp: Bringing up secondary CPUs ...
Feb 9 09:47:00.946370 kernel: Detected PIPT I-cache on CPU1
Feb 9 09:47:00.946385 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 9 09:47:00.946401 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 9 09:47:00.946421 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 9 09:47:00.946436 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 09:47:00.946452 kernel: SMP: Total of 2 processors activated.
Feb 9 09:47:00.946467 kernel: CPU features: detected: 32-bit EL0 Support
Feb 9 09:47:00.946482 kernel: CPU features: detected: 32-bit EL1 Support
Feb 9 09:47:00.946498 kernel: CPU features: detected: CRC32 instructions
Feb 9 09:47:00.946513 kernel: CPU: All CPU(s) started at EL1
Feb 9 09:47:00.946528 kernel: alternatives: patching kernel code
Feb 9 09:47:00.946544 kernel: devtmpfs: initialized
Feb 9 09:47:00.946563 kernel: KASLR disabled due to lack of seed
Feb 9 09:47:00.946579 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 09:47:00.946595 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 09:47:00.946620 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 09:47:00.946640 kernel: SMBIOS 3.0.0 present.
Feb 9 09:47:00.946656 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 9 09:47:00.946672 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 09:47:00.946688 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 9 09:47:00.946704 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 9 09:47:00.946721 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 9 09:47:00.946737 kernel: audit: initializing netlink subsys (disabled)
Feb 9 09:47:00.946753 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Feb 9 09:47:00.946773 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 09:47:00.946790 kernel: cpuidle: using governor menu
Feb 9 09:47:00.946806 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 9 09:47:00.946822 kernel: ASID allocator initialised with 32768 entries
Feb 9 09:47:00.946838 kernel: ACPI: bus type PCI registered
Feb 9 09:47:00.946858 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 09:47:00.946874 kernel: Serial: AMBA PL011 UART driver
Feb 9 09:47:00.946890 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 09:47:00.946907 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 9 09:47:00.946923 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 09:47:00.946939 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 9 09:47:00.946955 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 09:47:00.950010 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 9 09:47:00.950059 kernel: ACPI: Added _OSI(Module Device)
Feb 9 09:47:00.950086 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 09:47:00.950103 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 09:47:00.950202 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 09:47:00.950492 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 09:47:00.950840 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 09:47:00.950999 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 09:47:00.951022 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 09:47:00.951039 kernel: ACPI: Interpreter enabled
Feb 9 09:47:00.951055 kernel: ACPI: Using GIC for interrupt routing
Feb 9 09:47:00.951077 kernel: ACPI: MCFG table detected, 1 entries
Feb 9 09:47:00.951094 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 9 09:47:00.951395 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 9 09:47:00.951599 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 9 09:47:00.951796 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 9 09:47:00.952044 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 9 09:47:00.952253 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 9 09:47:00.952282 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 9 09:47:00.952299 kernel: acpiphp: Slot [1] registered
Feb 9 09:47:00.952316 kernel: acpiphp: Slot [2] registered
Feb 9 09:47:00.952332 kernel: acpiphp: Slot [3] registered
Feb 9 09:47:00.952348 kernel: acpiphp: Slot [4] registered
Feb 9 09:47:00.952364 kernel: acpiphp: Slot [5] registered
Feb 9 09:47:00.952381 kernel: acpiphp: Slot [6] registered
Feb 9 09:47:00.952397 kernel: acpiphp: Slot [7] registered
Feb 9 09:47:00.952413 kernel: acpiphp: Slot [8] registered
Feb 9 09:47:00.952434 kernel: acpiphp: Slot [9] registered
Feb 9 09:47:00.952450 kernel: acpiphp: Slot [10] registered
Feb 9 09:47:00.952466 kernel: acpiphp: Slot [11] registered
Feb 9 09:47:00.952482 kernel: acpiphp: Slot [12] registered
Feb 9 09:47:00.952498 kernel: acpiphp: Slot [13] registered
Feb 9 09:47:00.952514 kernel: acpiphp: Slot [14] registered
Feb 9 09:47:00.952530 kernel: acpiphp: Slot [15] registered
Feb 9 09:47:00.952546 kernel: acpiphp: Slot [16] registered
Feb 9 09:47:00.952562 kernel: acpiphp: Slot [17] registered
Feb 9 09:47:00.952578 kernel: acpiphp: Slot [18] registered
Feb 9 09:47:00.952599 kernel: acpiphp: Slot [19] registered
Feb 9 09:47:00.952615 kernel: acpiphp: Slot [20] registered
Feb 9 09:47:00.952631 kernel: acpiphp: Slot [21] registered
Feb 9 09:47:00.952647 kernel: acpiphp: Slot [22] registered
Feb 9 09:47:00.952663 kernel: acpiphp: Slot [23] registered
Feb 9 09:47:00.952679 kernel: acpiphp: Slot [24] registered
Feb 9 09:47:00.952695 kernel: acpiphp: Slot [25] registered
Feb 9 09:47:00.952712 kernel: acpiphp: Slot [26] registered
Feb 9 09:47:00.952728 kernel: acpiphp: Slot [27] registered
Feb 9 09:47:00.952748 kernel: acpiphp: Slot [28] registered
Feb 9 09:47:00.952764 kernel: acpiphp: Slot [29] registered
Feb 9 09:47:00.952780 kernel: acpiphp: Slot [30] registered
Feb 9 09:47:00.952796 kernel: acpiphp: Slot [31] registered
Feb 9 09:47:00.952812 kernel: PCI host bridge to bus 0000:00
Feb 9 09:47:00.955124 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 9 09:47:00.955358 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Feb 9 09:47:00.955554 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:47:00.955757 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 9 09:47:00.956062 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 9 09:47:00.956289 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 9 09:47:00.956498 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 9 09:47:00.956714 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 9 09:47:00.956917 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 9 09:47:00.967441 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:47:00.967687 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 9 09:47:00.967889 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 9 09:47:00.968115 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 9 09:47:00.968316 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 9 09:47:00.968515 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 9 09:47:00.968714 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 9 09:47:00.968922 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 9 09:47:00.969148 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 9 09:47:00.969353 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 9 09:47:00.969564 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 9 09:47:00.969781 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 9 09:47:00.969989 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Feb 9 09:47:00.972219 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 9 09:47:00.972251 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 9 09:47:00.972269 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 9 09:47:00.972286 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 9 09:47:00.972303 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 9 09:47:00.972320 kernel: iommu: Default domain type: Translated
Feb 9 09:47:00.972336 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 9 09:47:00.972352 kernel: vgaarb: loaded
Feb 9 09:47:00.972368 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 09:47:00.972385 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 09:47:00.972405 kernel: PTP clock support registered
Feb 9 09:47:00.972422 kernel: Registered efivars operations
Feb 9 09:47:00.972438 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 9 09:47:00.972454 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 09:47:00.972471 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 09:47:00.972487 kernel: pnp: PnP ACPI init
Feb 9 09:47:00.972708 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 9 09:47:00.972734 kernel: pnp: PnP ACPI: found 1 devices
Feb 9 09:47:00.972751 kernel: NET: Registered PF_INET protocol family
Feb 9 09:47:00.972773 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 09:47:00.972790 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 9 09:47:00.972807 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 09:47:00.972823 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 9 09:47:00.972840 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 9 09:47:00.972856 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 9 09:47:00.972873 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:47:00.972889 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 9 09:47:00.972906 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 09:47:00.972926 kernel: PCI: CLS 0 bytes, default 64
Feb 9 09:47:00.972943 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 9 09:47:00.972959 kernel: kvm [1]: HYP mode not available
Feb 9 09:47:00.974010 kernel: Initialise system trusted keyrings
Feb 9 09:47:00.974032 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 9 09:47:00.974050 kernel: Key type asymmetric registered
Feb 9 09:47:00.974066 kernel: Asymmetric key parser 'x509' registered
Feb 9 09:47:00.974083 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 09:47:00.974099 kernel: io scheduler mq-deadline registered
Feb 9 09:47:00.974122 kernel: io scheduler kyber registered
Feb 9 09:47:00.974139 kernel: io scheduler bfq registered
Feb 9 09:47:00.974359 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 9 09:47:00.974384 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 9 09:47:00.974401 kernel: ACPI: button: Power Button [PWRB]
Feb 9 09:47:00.974418 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 09:47:00.974435 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 9 09:47:00.974635 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 9 09:47:00.974663 kernel: printk: console [ttyS0] disabled
Feb 9 09:47:00.974681 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 9 09:47:00.974698 kernel: printk: console [ttyS0] enabled
Feb 9 09:47:00.974714 kernel: printk: bootconsole [uart0] disabled
Feb 9 09:47:00.974730 kernel: thunder_xcv, ver 1.0
Feb 9 09:47:00.974746 kernel: thunder_bgx, ver 1.0
Feb 9 09:47:00.974762 kernel: nicpf, ver 1.0
Feb 9 09:47:00.974778 kernel: nicvf, ver 1.0
Feb 9 09:47:00.975012 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 9 09:47:00.975408 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-09T09:47:00 UTC (1707472020)
Feb 9 09:47:00.975434 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 09:47:00.975451 kernel: NET: Registered PF_INET6 protocol family
Feb 9 09:47:00.975467 kernel: Segment Routing with IPv6
Feb 9 09:47:00.975484 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 09:47:00.975500 kernel: NET: Registered PF_PACKET protocol family
Feb 9 09:47:00.975516 kernel: Key type dns_resolver registered
Feb 9 09:47:00.975532 kernel: registered taskstats version 1
Feb 9 09:47:00.975553 kernel: Loading compiled-in X.509 certificates
Feb 9 09:47:00.975570 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: ca91574208414224935c9cea513398977daf917d'
Feb 9 09:47:00.975586 kernel: Key type .fscrypt registered
Feb 9 09:47:00.975602 kernel: Key type fscrypt-provisioning registered
Feb 9 09:47:00.975618 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 09:47:00.975634 kernel: ima: Allocated hash algorithm: sha1
Feb 9 09:47:00.975650 kernel: ima: No architecture policies found
Feb 9 09:47:00.975666 kernel: Freeing unused kernel memory: 34688K
Feb 9 09:47:00.975682 kernel: Run /init as init process
Feb 9 09:47:00.975702 kernel: with arguments:
Feb 9 09:47:00.975719 kernel: /init
Feb 9 09:47:00.975734 kernel: with environment:
Feb 9 09:47:00.975750 kernel: HOME=/
Feb 9 09:47:00.975766 kernel: TERM=linux
Feb 9 09:47:00.975782 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 09:47:00.975803 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:47:00.975824 systemd[1]: Detected virtualization amazon.
Feb 9 09:47:00.975846 systemd[1]: Detected architecture arm64.
Feb 9 09:47:00.975863 systemd[1]: Running in initrd.
Feb 9 09:47:00.975880 systemd[1]: No hostname configured, using default hostname.
Feb 9 09:47:00.975897 systemd[1]: Hostname set to .
Feb 9 09:47:00.975915 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:47:00.975933 systemd[1]: Queued start job for default target initrd.target.
Feb 9 09:47:00.975950 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:47:00.975967 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:47:00.976548 systemd[1]: Reached target paths.target.
Feb 9 09:47:00.976644 systemd[1]: Reached target slices.target.
Feb 9 09:47:00.976666 systemd[1]: Reached target swap.target.
Feb 9 09:47:00.976684 systemd[1]: Reached target timers.target.
Feb 9 09:47:00.976702 systemd[1]: Listening on iscsid.socket.
Feb 9 09:47:00.976720 systemd[1]: Listening on iscsiuio.socket.
Feb 9 09:47:00.976738 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 09:47:00.976755 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 09:47:00.976779 systemd[1]: Listening on systemd-journald.socket.
Feb 9 09:47:00.976796 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:47:00.976814 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:47:00.976832 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:47:00.976850 systemd[1]: Reached target sockets.target.
Feb 9 09:47:00.976867 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:47:00.976885 systemd[1]: Finished network-cleanup.service.
Feb 9 09:47:00.976902 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 09:47:00.976920 systemd[1]: Starting systemd-journald.service...
Feb 9 09:47:00.976942 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:47:00.976959 systemd[1]: Starting systemd-resolved.service...
Feb 9 09:47:00.987172 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 09:47:00.987201 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:47:00.987221 kernel: audit: type=1130 audit(1707472020.958:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.987241 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 09:47:00.987260 kernel: audit: type=1130 audit(1707472020.978:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.987277 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 09:47:00.987308 systemd-journald[308]: Journal started
Feb 9 09:47:00.987399 systemd-journald[308]: Runtime Journal (/run/log/journal/ec2a75664138492266bb530e2b94af23) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:47:00.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:00.948602 systemd-modules-load[309]: Inserted module 'overlay'
Feb 9 09:47:01.003066 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 09:47:01.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.013994 kernel: audit: type=1130 audit(1707472021.003:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.014036 systemd[1]: Started systemd-journald.service.
Feb 9 09:47:01.015886 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 9 09:47:01.017786 kernel: Bridge firewalling registered
Feb 9 09:47:01.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.033891 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 09:47:01.042490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 09:47:01.048003 kernel: audit: type=1130 audit(1707472021.023:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.059704 systemd-resolved[310]: Positive Trust Anchors:
Feb 9 09:47:01.059744 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 09:47:01.059801 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 09:47:01.089449 kernel: SCSI subsystem initialized
Feb 9 09:47:01.088741 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 09:47:01.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.099999 kernel: audit: type=1130 audit(1707472021.088:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.118262 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 09:47:01.118344 kernel: device-mapper: uevent: version 1.0.3
Feb 9 09:47:01.118791 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 09:47:01.135245 kernel: audit: type=1130 audit(1707472021.118:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.135282 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 09:47:01.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.122001 systemd[1]: Starting dracut-cmdline.service...
Feb 9 09:47:01.142571 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 9 09:47:01.145396 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:47:01.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.170621 kernel: audit: type=1130 audit(1707472021.147:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.172804 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:47:01.188885 dracut-cmdline[326]: dracut-dracut-053
Feb 9 09:47:01.196854 dracut-cmdline[326]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=14ffd9340f674a8d04c9d43eed85484d8b2b7e2bcd8b36a975c9ac66063d537d
Feb 9 09:47:01.216130 kernel: audit: type=1130 audit(1707472021.196:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.196041 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:47:01.314999 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 09:47:01.327002 kernel: iscsi: registered transport (tcp)
Feb 9 09:47:01.351798 kernel: iscsi: registered transport (qla4xxx)
Feb 9 09:47:01.351867 kernel: QLogic iSCSI HBA Driver
Feb 9 09:47:01.551897 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 9 09:47:01.553822 kernel: random: crng init done
Feb 9 09:47:01.555478 systemd[1]: Started systemd-resolved.service.
Feb 9 09:47:01.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.557317 systemd[1]: Reached target nss-lookup.target.
Feb 9 09:47:01.569231 kernel: audit: type=1130 audit(1707472021.555:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.583609 systemd[1]: Finished dracut-cmdline.service.
Feb 9 09:47:01.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:01.588133 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 09:47:01.654015 kernel: raid6: neonx8 gen() 6442 MB/s
Feb 9 09:47:01.672001 kernel: raid6: neonx8 xor() 4543 MB/s
Feb 9 09:47:01.690000 kernel: raid6: neonx4 gen() 6600 MB/s
Feb 9 09:47:01.708001 kernel: raid6: neonx4 xor() 4655 MB/s
Feb 9 09:47:01.726002 kernel: raid6: neonx2 gen() 5819 MB/s
Feb 9 09:47:01.744002 kernel: raid6: neonx2 xor() 4391 MB/s
Feb 9 09:47:01.762002 kernel: raid6: neonx1 gen() 4513 MB/s
Feb 9 09:47:01.780003 kernel: raid6: neonx1 xor() 3566 MB/s
Feb 9 09:47:01.798002 kernel: raid6: int64x8 gen() 3454 MB/s
Feb 9 09:47:01.816001 kernel: raid6: int64x8 xor() 2048 MB/s
Feb 9 09:47:01.834001 kernel: raid6: int64x4 gen() 3851 MB/s
Feb 9 09:47:01.852006 kernel: raid6: int64x4 xor() 2165 MB/s
Feb 9 09:47:01.870001 kernel: raid6: int64x2 gen() 3616 MB/s
Feb 9 09:47:01.888001 kernel: raid6: int64x2 xor() 1924 MB/s
Feb 9 09:47:01.906019 kernel: raid6: int64x1 gen() 2756 MB/s
Feb 9 09:47:01.925490 kernel: raid6: int64x1 xor() 1437 MB/s
Feb 9 09:47:01.925547 kernel: raid6: using algorithm neonx4 gen() 6600 MB/s
Feb 9 09:47:01.925571 kernel: raid6: .... xor() 4655 MB/s, rmw enabled
Feb 9 09:47:01.927259 kernel: raid6: using neon recovery algorithm
Feb 9 09:47:01.946009 kernel: xor: measuring software checksum speed
Feb 9 09:47:01.949002 kernel: 8regs : 9335 MB/sec
Feb 9 09:47:01.949032 kernel: 32regs : 11108 MB/sec
Feb 9 09:47:01.955032 kernel: arm64_neon : 9484 MB/sec
Feb 9 09:47:01.955072 kernel: xor: using function: 32regs (11108 MB/sec)
Feb 9 09:47:02.045028 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 9 09:47:02.062218 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 09:47:02.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.062000 audit: BPF prog-id=7 op=LOAD
Feb 9 09:47:02.063000 audit: BPF prog-id=8 op=LOAD
Feb 9 09:47:02.067435 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:47:02.097132 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 9 09:47:02.107136 systemd[1]: Started systemd-udevd.service.
Feb 9 09:47:02.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.120942 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 09:47:02.151122 dracut-pre-trigger[523]: rd.md=0: removing MD RAID activation
Feb 9 09:47:02.209479 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 09:47:02.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.214047 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:47:02.316508 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:47:02.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:02.426933 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 9 09:47:02.427025 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 9 09:47:02.442490 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 9 09:47:02.442779 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 9 09:47:02.442806 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 9 09:47:02.450435 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 9 09:47:02.454010 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 9 09:47:02.457018 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 9 09:47:02.460148 kernel: GPT:9289727 != 16777215
Feb 9 09:47:02.460217 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 9 09:47:02.462325 kernel: GPT:9289727 != 16777215
Feb 9 09:47:02.462398 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 9 09:47:02.462427 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:47:02.467021 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e9:97:42:4a:5d
Feb 9 09:47:02.471658 (udev-worker)[561]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:47:02.533019 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (563)
Feb 9 09:47:02.563631 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 09:47:02.619430 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:47:02.657921 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 09:47:02.687843 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 09:47:02.692840 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 09:47:02.698207 systemd[1]: Starting disk-uuid.service...
Feb 9 09:47:02.709586 disk-uuid[672]: Primary Header is updated.
Feb 9 09:47:02.709586 disk-uuid[672]: Secondary Entries is updated.
Feb 9 09:47:02.709586 disk-uuid[672]: Secondary Header is updated.
Feb 9 09:47:02.718017 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:47:02.728007 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:47:03.736027 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 9 09:47:03.737357 disk-uuid[673]: The operation has completed successfully.
Feb 9 09:47:03.895881 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 09:47:03.896480 systemd[1]: Finished disk-uuid.service.
Feb 9 09:47:03.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:03.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:03.930015 systemd[1]: Starting verity-setup.service...
Feb 9 09:47:03.967900 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 9 09:47:04.030593 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 09:47:04.035698 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 09:47:04.042888 systemd[1]: Finished verity-setup.service.
Feb 9 09:47:04.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.125024 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 09:47:04.125672 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 09:47:04.128367 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 09:47:04.130828 systemd[1]: Starting ignition-setup.service...
Feb 9 09:47:04.139392 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 09:47:04.164584 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:47:04.164659 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:47:04.167004 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:47:04.174019 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:47:04.191453 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 09:47:04.223705 systemd[1]: Finished ignition-setup.service.
Feb 9 09:47:04.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.228644 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 09:47:04.297296 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 09:47:04.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.300000 audit: BPF prog-id=9 op=LOAD
Feb 9 09:47:04.302381 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:47:04.349557 systemd-networkd[1101]: lo: Link UP
Feb 9 09:47:04.349580 systemd-networkd[1101]: lo: Gained carrier
Feb 9 09:47:04.353292 systemd-networkd[1101]: Enumeration completed
Feb 9 09:47:04.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.353779 systemd-networkd[1101]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:47:04.354031 systemd[1]: Started systemd-networkd.service.
Feb 9 09:47:04.356568 systemd[1]: Reached target network.target.
Feb 9 09:47:04.359352 systemd-networkd[1101]: eth0: Link UP
Feb 9 09:47:04.359360 systemd-networkd[1101]: eth0: Gained carrier
Feb 9 09:47:04.363397 systemd[1]: Starting iscsiuio.service...
Feb 9 09:47:04.380192 systemd-networkd[1101]: eth0: DHCPv4 address 172.31.16.31/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:47:04.384987 systemd[1]: Started iscsiuio.service.
Feb 9 09:47:04.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.387934 systemd[1]: Starting iscsid.service...
Feb 9 09:47:04.397588 iscsid[1106]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:47:04.397588 iscsid[1106]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 09:47:04.397588 iscsid[1106]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 09:47:04.397588 iscsid[1106]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 09:47:04.397588 iscsid[1106]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 09:47:04.417391 iscsid[1106]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 09:47:04.410644 systemd[1]: Started iscsid.service.
Feb 9 09:47:04.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.438138 systemd[1]: Starting dracut-initqueue.service...
Feb 9 09:47:04.462671 systemd[1]: Finished dracut-initqueue.service.
Feb 9 09:47:04.466026 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 09:47:04.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.466159 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:47:04.473026 systemd[1]: Reached target remote-fs.target.
Feb 9 09:47:04.477556 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 09:47:04.506294 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 09:47:04.508000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.814772 ignition[1044]: Ignition 2.14.0
Feb 9 09:47:04.816526 ignition[1044]: Stage: fetch-offline
Feb 9 09:47:04.818243 ignition[1044]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:04.820603 ignition[1044]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:04.835934 ignition[1044]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:04.838838 ignition[1044]: Ignition finished successfully
Feb 9 09:47:04.841995 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 09:47:04.852120 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 9 09:47:04.852162 kernel: audit: type=1130 audit(1707472024.844:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.846909 systemd[1]: Starting ignition-fetch.service...
Feb 9 09:47:04.865839 ignition[1125]: Ignition 2.14.0
Feb 9 09:47:04.867507 ignition[1125]: Stage: fetch
Feb 9 09:47:04.869022 ignition[1125]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:04.871347 ignition[1125]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:04.882032 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:04.884618 ignition[1125]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:04.892558 ignition[1125]: INFO : PUT result: OK
Feb 9 09:47:04.896697 ignition[1125]: DEBUG : parsed url from cmdline: ""
Feb 9 09:47:04.898600 ignition[1125]: INFO : no config URL provided
Feb 9 09:47:04.900234 ignition[1125]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 9 09:47:04.902596 ignition[1125]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 9 09:47:04.902596 ignition[1125]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:04.907314 ignition[1125]: INFO : PUT result: OK
Feb 9 09:47:04.921964 ignition[1125]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 9 09:47:04.925107 ignition[1125]: INFO : GET result: OK
Feb 9 09:47:04.926953 ignition[1125]: DEBUG : parsing config with SHA512: 3c6bae03b3139fb8dd07481c5b08b2b05744771c9832c6ec033d916a988fb544297fd6b4062ff567f217156aa8e61ebaaa7961ad9f1f68418be9847fea393bff
Feb 9 09:47:04.970721 unknown[1125]: fetched base config from "system"
Feb 9 09:47:04.970754 unknown[1125]: fetched base config from "system"
Feb 9 09:47:04.970770 unknown[1125]: fetched user config from "aws"
Feb 9 09:47:04.976517 ignition[1125]: fetch: fetch complete
Feb 9 09:47:04.976545 ignition[1125]: fetch: fetch passed
Feb 9 09:47:04.976650 ignition[1125]: Ignition finished successfully
Feb 9 09:47:04.983333 systemd[1]: Finished ignition-fetch.service.
Feb 9 09:47:04.994028 kernel: audit: type=1130 audit(1707472024.983:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:04.986487 systemd[1]: Starting ignition-kargs.service...
Feb 9 09:47:05.009796 ignition[1131]: Ignition 2.14.0
Feb 9 09:47:05.010319 ignition[1131]: Stage: kargs
Feb 9 09:47:05.010614 ignition[1131]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:05.010667 ignition[1131]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:05.026354 ignition[1131]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:05.029059 ignition[1131]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:05.031932 ignition[1131]: INFO : PUT result: OK
Feb 9 09:47:05.037380 ignition[1131]: kargs: kargs passed
Feb 9 09:47:05.037510 ignition[1131]: Ignition finished successfully
Feb 9 09:47:05.041943 systemd[1]: Finished ignition-kargs.service.
Feb 9 09:47:05.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.046801 systemd[1]: Starting ignition-disks.service...
Feb 9 09:47:05.055652 kernel: audit: type=1130 audit(1707472025.044:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.064087 ignition[1137]: Ignition 2.14.0
Feb 9 09:47:05.064117 ignition[1137]: Stage: disks
Feb 9 09:47:05.064444 ignition[1137]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:05.064508 ignition[1137]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:05.080781 ignition[1137]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:05.083345 ignition[1137]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:05.086245 ignition[1137]: INFO : PUT result: OK
Feb 9 09:47:05.091939 ignition[1137]: disks: disks passed
Feb 9 09:47:05.092113 ignition[1137]: Ignition finished successfully
Feb 9 09:47:05.096588 systemd[1]: Finished ignition-disks.service.
Feb 9 09:47:05.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.100044 systemd[1]: Reached target initrd-root-device.target.
Feb 9 09:47:05.115904 kernel: audit: type=1130 audit(1707472025.098:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.108955 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:47:05.110734 systemd[1]: Reached target local-fs.target.
Feb 9 09:47:05.112405 systemd[1]: Reached target sysinit.target.
Feb 9 09:47:05.115733 systemd[1]: Reached target basic.target.
Feb 9 09:47:05.131319 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 09:47:05.167393 systemd-fsck[1145]: ROOT: clean, 602/553520 files, 56013/553472 blocks
Feb 9 09:47:05.175331 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 09:47:05.186624 kernel: audit: type=1130 audit(1707472025.176:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.178929 systemd[1]: Mounting sysroot.mount...
Feb 9 09:47:05.202012 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 09:47:05.203756 systemd[1]: Mounted sysroot.mount.
Feb 9 09:47:05.207638 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 09:47:05.228012 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 09:47:05.230731 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 9 09:47:05.230815 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 09:47:05.230874 systemd[1]: Reached target ignition-diskful.target.
Feb 9 09:47:05.238340 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 09:47:05.256184 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:47:05.264566 systemd[1]: Starting initrd-setup-root.service...
Feb 9 09:47:05.276307 initrd-setup-root[1167]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 09:47:05.285017 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1162)
Feb 9 09:47:05.294423 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:47:05.294488 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:47:05.296679 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:47:05.304006 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:47:05.307235 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:47:05.313947 initrd-setup-root[1193]: cut: /sysroot/etc/group: No such file or directory
Feb 9 09:47:05.323118 initrd-setup-root[1201]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 09:47:05.331369 initrd-setup-root[1209]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 09:47:05.527887 systemd[1]: Finished initrd-setup-root.service.
Feb 9 09:47:05.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.532770 systemd[1]: Starting ignition-mount.service...
Feb 9 09:47:05.542000 kernel: audit: type=1130 audit(1707472025.530:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.542688 systemd[1]: Starting sysroot-boot.service...
Feb 9 09:47:05.558085 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:47:05.558261 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 09:47:05.574257 systemd-networkd[1101]: eth0: Gained IPv6LL
Feb 9 09:47:05.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.588057 systemd[1]: Finished sysroot-boot.service.
Feb 9 09:47:05.598268 kernel: audit: type=1130 audit(1707472025.588:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.598526 ignition[1230]: INFO : Ignition 2.14.0
Feb 9 09:47:05.600389 ignition[1230]: INFO : Stage: mount
Feb 9 09:47:05.602142 ignition[1230]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:05.604713 ignition[1230]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:05.620153 ignition[1230]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:05.622965 ignition[1230]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:05.625848 ignition[1230]: INFO : PUT result: OK
Feb 9 09:47:05.631233 ignition[1230]: INFO : mount: mount passed
Feb 9 09:47:05.632902 ignition[1230]: INFO : Ignition finished successfully
Feb 9 09:47:05.636024 systemd[1]: Finished ignition-mount.service.
Feb 9 09:47:05.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.639444 systemd[1]: Starting ignition-files.service...
Feb 9 09:47:05.649311 kernel: audit: type=1130 audit(1707472025.634:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:05.655425 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 09:47:05.673022 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1237)
Feb 9 09:47:05.679143 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 9 09:47:05.679189 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 9 09:47:05.679212 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 9 09:47:05.688004 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 9 09:47:05.692589 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 09:47:05.712043 ignition[1256]: INFO : Ignition 2.14.0
Feb 9 09:47:05.713884 ignition[1256]: INFO : Stage: files
Feb 9 09:47:05.715474 ignition[1256]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:05.717892 ignition[1256]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:05.732944 ignition[1256]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:05.735787 ignition[1256]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:05.738571 ignition[1256]: INFO : PUT result: OK
Feb 9 09:47:05.745370 ignition[1256]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 09:47:05.750267 ignition[1256]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 09:47:05.753029 ignition[1256]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 09:47:05.798910 ignition[1256]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 09:47:05.810092 ignition[1256]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 09:47:05.819117 unknown[1256]: wrote ssh authorized keys file for user: core
Feb 9 09:47:05.821313 ignition[1256]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 09:47:05.826495 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:47:05.830687 ignition[1256]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-arm64-v1.3.0.tgz: attempt #1
Feb 9 09:47:06.167703 ignition[1256]: INFO : GET result: OK
Feb 9 09:47:06.689898 ignition[1256]: DEBUG : file matches expected sum of: b2b7fb74f1b3cb8928f49e5bf9d4bc686e057e837fac3caf1b366d54757921dba80d70cc010399b274d136e8dee9a25b1ad87cdfdc4ffcf42cf88f3e8f99587a
Feb 9 09:47:06.694860 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.3.0.tgz"
Feb 9 09:47:06.694860 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:47:06.694860 ignition[1256]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-arm64.tar.gz: attempt #1
Feb 9 09:47:06.962565 ignition[1256]: INFO : GET result: OK
Feb 9 09:47:07.245301 ignition[1256]: DEBUG : file matches expected sum of: db062e43351a63347871e7094115be2ae3853afcd346d47f7b51141da8c3202c2df58d2e17359322f632abcb37474fd7fdb3b7aadbc5cfd5cf6d3bad040b6251
Feb 9 09:47:07.249950 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-arm64.tar.gz"
Feb 9 09:47:07.249950 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:47:07.257228 ignition[1256]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:47:07.270348 ignition[1256]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2854666294"
Feb 9 09:47:07.273321 ignition[1256]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2854666294": device or resource busy
Feb 9 09:47:07.280390 ignition[1256]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2854666294", trying btrfs: device or resource busy
Feb 9 09:47:07.280390 ignition[1256]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2854666294"
Feb 9 09:47:07.286724 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1259)
Feb 9 09:47:07.286761 ignition[1256]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2854666294"
Feb 9 09:47:07.300204 ignition[1256]: INFO : op(3): [started] unmounting "/mnt/oem2854666294"
Feb 9 09:47:07.302599 ignition[1256]: INFO : op(3): [finished] unmounting "/mnt/oem2854666294"
Feb 9 09:47:07.304901 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 9 09:47:07.308231 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:47:07.308231 ignition[1256]: INFO : GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubeadm: attempt #1
Feb 9 09:47:07.316464 systemd[1]: mnt-oem2854666294.mount: Deactivated successfully.
Feb 9 09:47:07.417615 ignition[1256]: INFO : GET result: OK
Feb 9 09:47:08.070633 ignition[1256]: DEBUG : file matches expected sum of: 5a08b81f9cc82d3cce21130856ca63b8dafca9149d9775dd25b376eb0f18209aa0e4a47c0a6d7e6fb1316aacd5d59dec770f26c09120c866949d70bc415518b3
Feb 9 09:47:08.075602 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 09:47:08.075602 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:47:08.075602 ignition[1256]: INFO : GET https://dl.k8s.io/release/v1.28.1/bin/linux/arm64/kubelet: attempt #1
Feb 9 09:47:08.137493 ignition[1256]: INFO : GET result: OK
Feb 9 09:47:09.470008 ignition[1256]: DEBUG : file matches expected sum of: 5a898ef543a6482895101ea58e33602e3c0a7682d322aaf08ac3dc8a5a3c8da8f09600d577024549288f8cebb1a86f9c79927796b69a3d8fe989ca8f12b147d6
Feb 9 09:47:09.474998 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 09:47:09.474998 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 09:47:09.474998 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 09:47:09.474998 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:47:09.488588 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 09:47:09.497854 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:47:09.501400 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 09:47:09.504857 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:47:09.508810 ignition[1256]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:47:09.522122 ignition[1256]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem628599329"
Feb 9 09:47:09.527195 ignition[1256]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem628599329": device or resource busy
Feb 9 09:47:09.527195 ignition[1256]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem628599329", trying btrfs: device or resource busy
Feb 9 09:47:09.527195 ignition[1256]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem628599329"
Feb 9 09:47:09.527195 ignition[1256]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem628599329"
Feb 9 09:47:09.527195 ignition[1256]: INFO : op(6): [started] unmounting "/mnt/oem628599329"
Feb 9 09:47:09.527195 ignition[1256]: INFO : op(6): [finished] unmounting "/mnt/oem628599329"
Feb 9 09:47:09.527195 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 9 09:47:09.527195 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:47:09.527195 ignition[1256]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:47:09.535467 systemd[1]: mnt-oem628599329.mount: Deactivated successfully.
Feb 9 09:47:09.573473 ignition[1256]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2888436536"
Feb 9 09:47:09.578069 ignition[1256]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2888436536": device or resource busy
Feb 9 09:47:09.578069 ignition[1256]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2888436536", trying btrfs: device or resource busy
Feb 9 09:47:09.578069 ignition[1256]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2888436536"
Feb 9 09:47:09.578069 ignition[1256]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2888436536"
Feb 9 09:47:09.578069 ignition[1256]: INFO : op(9): [started] unmounting "/mnt/oem2888436536"
Feb 9 09:47:09.578069 ignition[1256]: INFO : op(9): [finished] unmounting "/mnt/oem2888436536"
Feb 9 09:47:09.578069 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 9 09:47:09.578069 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:47:09.578069 ignition[1256]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 09:47:09.592739 systemd[1]: mnt-oem2888436536.mount: Deactivated successfully.
Feb 9 09:47:09.624415 ignition[1256]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3065438993"
Feb 9 09:47:09.627305 ignition[1256]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3065438993": device or resource busy
Feb 9 09:47:09.627305 ignition[1256]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3065438993", trying btrfs: device or resource busy
Feb 9 09:47:09.627305 ignition[1256]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3065438993"
Feb 9 09:47:09.640145 ignition[1256]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3065438993"
Feb 9 09:47:09.640145 ignition[1256]: INFO : op(c): [started] unmounting "/mnt/oem3065438993"
Feb 9 09:47:09.640145 ignition[1256]: INFO : op(c): [finished] unmounting "/mnt/oem3065438993"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(e): [started] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(e): [finished] processing unit "coreos-metadata-sshkeys@.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(f): [started] processing unit "amazon-ssm-agent.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(f): op(10): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(f): op(10): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(f): [finished] processing unit "amazon-ssm-agent.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(11): [started] processing unit "nvidia.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(11): [finished] processing unit "nvidia.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(12): [started] processing unit "prepare-cni-plugins.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(12): op(13): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(12): op(13): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(12): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 09:47:09.640145 ignition[1256]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(16): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(16): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(17): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(17): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(18): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(18): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service "
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(19): [started] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(19): [finished] setting preset to enabled for "amazon-ssm-agent.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(1a): [started] setting preset to enabled for "nvidia.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: op(1a): [finished] setting preset to enabled for "nvidia.service"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: createResultFile: createFiles: op(1b): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: createResultFile: createFiles: op(1b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 09:47:09.709891 ignition[1256]: INFO : files: files passed
Feb 9 09:47:09.709891 ignition[1256]: INFO : Ignition finished successfully
Feb 9 09:47:09.760430 systemd[1]: Finished ignition-files.service.
Feb 9 09:47:09.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.765628 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 09:47:09.774018 kernel: audit: type=1130 audit(1707472029.762:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.776146 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 09:47:09.777561 systemd[1]: Starting ignition-quench.service...
Feb 9 09:47:09.786002 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 09:47:09.786211 systemd[1]: Finished ignition-quench.service.
Feb 9 09:47:09.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.795181 initrd-setup-root-after-ignition[1281]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 09:47:09.802337 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 09:47:09.806060 kernel: audit: type=1130 audit(1707472029.786:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.806447 systemd[1]: Reached target ignition-complete.target.
Feb 9 09:47:09.811037 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 09:47:09.839958 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 09:47:09.841950 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 09:47:09.852042 kernel: kauditd_printk_skb: 2 callbacks suppressed
Feb 9 09:47:09.852081 kernel: audit: type=1130 audit(1707472029.848:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.849916 systemd[1]: Reached target initrd-fs.target.
Feb 9 09:47:09.867262 kernel: audit: type=1131 audit(1707472029.848:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.868995 systemd[1]: Reached target initrd.target.
Feb 9 09:47:09.872055 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 09:47:09.876026 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 09:47:09.897170 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 09:47:09.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.898757 systemd[1]: Starting initrd-cleanup.service...
Feb 9 09:47:09.917608 kernel: audit: type=1130 audit(1707472029.896:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.924719 systemd[1]: Stopped target nss-lookup.target.
Feb 9 09:47:10.027665 kernel: audit: type=1131 audit(1707472029.923:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.027712 kernel: audit: type=1131 audit(1707472029.945:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.027746 kernel: audit: type=1131 audit(1707472029.945:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.027771 kernel: audit: type=1131 audit(1707472029.962:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.027796 kernel: audit: type=1131 audit(1707472029.962:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.962000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.925061 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 09:47:09.925838 systemd[1]: Stopped target timers.target.
Feb 9 09:47:09.926539 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 09:47:09.926747 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 09:47:09.935812 systemd[1]: Stopped target initrd.target.
Feb 9 09:47:09.938072 systemd[1]: Stopped target basic.target.
Feb 9 09:47:09.938779 systemd[1]: Stopped target ignition-complete.target.
Feb 9 09:47:09.939513 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 09:47:09.940258 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 09:47:09.940997 systemd[1]: Stopped target remote-fs.target.
Feb 9 09:47:09.941696 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 09:47:10.049606 iscsid[1106]: iscsid shutting down.
Feb 9 09:47:10.071091 kernel: audit: type=1131 audit(1707472030.059:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.942456 systemd[1]: Stopped target sysinit.target.
Feb 9 09:47:10.072713 ignition[1294]: INFO : Ignition 2.14.0
Feb 9 09:47:10.072713 ignition[1294]: INFO : Stage: umount
Feb 9 09:47:10.072713 ignition[1294]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 09:47:10.072713 ignition[1294]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 9 09:47:09.943198 systemd[1]: Stopped target local-fs.target.
Feb 9 09:47:09.943902 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 09:47:09.944645 systemd[1]: Stopped target swap.target.
Feb 9 09:47:09.945277 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 09:47:09.945556 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 09:47:10.112726 kernel: audit: type=1131 audit(1707472030.099:50): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.112831 ignition[1294]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 9 09:47:10.112831 ignition[1294]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 9 09:47:10.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:09.953774 systemd[1]: Stopped target cryptsetup.target.
Feb 9 09:47:09.954476 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 09:47:09.954746 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 09:47:09.955408 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 09:47:09.962890 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 09:47:10.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.137534 ignition[1294]: INFO : PUT result: OK
Feb 9 09:47:09.963811 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 09:47:09.964088 systemd[1]: Stopped ignition-files.service.
Feb 9 09:47:10.024397 systemd[1]: Stopping ignition-mount.service...
Feb 9 09:47:10.044631 systemd[1]: Stopping iscsid.service...
Feb 9 09:47:10.046751 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 09:47:10.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.048148 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 09:47:10.069684 systemd[1]: Stopping sysroot-boot.service...
Feb 9 09:47:10.074493 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 09:47:10.077524 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 09:47:10.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.169497 ignition[1294]: INFO : umount: umount passed
Feb 9 09:47:10.169497 ignition[1294]: INFO : Ignition finished successfully
Feb 9 09:47:10.108182 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 09:47:10.179000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.108558 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 09:47:10.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.122012 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 09:47:10.122316 systemd[1]: Stopped iscsid.service.
Feb 9 09:47:10.130115 systemd[1]: Stopping iscsiuio.service...
Feb 9 09:47:10.137166 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 09:47:10.139391 systemd[1]: Finished initrd-cleanup.service.
Feb 9 09:47:10.195000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.156609 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 09:47:10.156870 systemd[1]: Stopped iscsiuio.service.
Feb 9 09:47:10.165283 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 09:47:10.165476 systemd[1]: Stopped ignition-mount.service.
Feb 9 09:47:10.167341 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 09:47:10.167436 systemd[1]: Stopped ignition-disks.service.
Feb 9 09:47:10.176323 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 09:47:10.176428 systemd[1]: Stopped ignition-kargs.service.
Feb 9 09:47:10.182574 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 09:47:10.182664 systemd[1]: Stopped ignition-fetch.service.
Feb 9 09:47:10.190280 systemd[1]: Stopped target network.target.
Feb 9 09:47:10.191867 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 09:47:10.193673 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 09:47:10.196799 systemd[1]: Stopped target paths.target.
Feb 9 09:47:10.198336 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 09:47:10.211259 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 09:47:10.217572 systemd[1]: Stopped target slices.target.
Feb 9 09:47:10.219108 systemd[1]: Stopped target sockets.target.
Feb 9 09:47:10.222499 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 09:47:10.224083 systemd[1]: Closed iscsid.socket.
Feb 9 09:47:10.233330 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 09:47:10.233410 systemd[1]: Closed iscsiuio.socket.
Feb 9 09:47:10.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.236331 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 09:47:10.237590 systemd[1]: Stopped ignition-setup.service.
Feb 9 09:47:10.245641 systemd[1]: Stopping systemd-networkd.service...
Feb 9 09:47:10.248656 systemd[1]: Stopping systemd-resolved.service...
Feb 9 09:47:10.252127 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 09:47:10.254004 systemd[1]: Stopped sysroot-boot.service.
Feb 9 09:47:10.254000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.257326 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 09:47:10.257427 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 09:47:10.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.263081 systemd-networkd[1101]: eth0: DHCPv6 lease lost
Feb 9 09:47:10.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.264708 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 09:47:10.264903 systemd[1]: Stopped systemd-resolved.service.
Feb 9 09:47:10.272577 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 09:47:10.274582 systemd[1]: Stopped systemd-networkd.service.
Feb 9 09:47:10.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.277830 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 09:47:10.277920 systemd[1]: Closed systemd-networkd.socket.
Feb 9 09:47:10.280000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 09:47:10.280000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 09:47:10.284133 systemd[1]: Stopping network-cleanup.service...
Feb 9 09:47:10.288000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.290000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.292000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.287734 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 09:47:10.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.287854 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 09:47:10.289766 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 09:47:10.289846 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 09:47:10.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.314000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.291666 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 09:47:10.291749 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 09:47:10.293673 systemd[1]: Stopping systemd-udevd.service...
Feb 9 09:47:10.300818 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 09:47:10.301153 systemd[1]: Stopped systemd-udevd.service.
Feb 9 09:47:10.307111 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 09:47:10.307197 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 09:47:10.310864 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 09:47:10.310938 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 09:47:10.312652 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 09:47:10.312737 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 09:47:10.314412 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 09:47:10.314492 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 09:47:10.316128 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 09:47:10.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.351000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.316204 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 09:47:10.319326 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 09:47:10.327296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 09:47:10.327409 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 09:47:10.329942 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 09:47:10.330257 systemd[1]: Stopped network-cleanup.service.
Feb 9 09:47:10.375586 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 09:47:10.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:10.375781 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 09:47:10.378594 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 09:47:10.382686 systemd[1]: Starting initrd-switch-root.service...
Feb 9 09:47:10.401162 systemd[1]: Switching root.
Feb 9 09:47:10.423166 systemd-journald[308]: Journal stopped
Feb 9 09:47:15.637290 systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Feb 9 09:47:15.642351 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 09:47:15.642412 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 09:47:15.642445 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 09:47:15.642476 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 09:47:15.642507 kernel: SELinux: policy capability open_perms=1
Feb 9 09:47:15.642538 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 09:47:15.642567 kernel: SELinux: policy capability always_check_network=0
Feb 9 09:47:15.642594 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 09:47:15.642625 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 09:47:15.642654 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 09:47:15.642687 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 09:47:15.642718 systemd[1]: Successfully loaded SELinux policy in 85.420ms.
Feb 9 09:47:15.642763 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.666ms.
Feb 9 09:47:15.642806 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 09:47:15.642846 systemd[1]: Detected virtualization amazon.
Feb 9 09:47:15.642879 systemd[1]: Detected architecture arm64.
Feb 9 09:47:15.642912 systemd[1]: Detected first boot.
Feb 9 09:47:15.642941 systemd[1]: Initializing machine ID from VM UUID.
Feb 9 09:47:15.643228 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 09:47:15.643263 systemd[1]: Populated /etc with preset unit settings.
Feb 9 09:47:15.643297 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 09:47:15.643333 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 09:47:15.643365 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 09:47:15.643403 kernel: kauditd_printk_skb: 44 callbacks suppressed
Feb 9 09:47:15.643434 kernel: audit: type=1334 audit(1707472035.218:88): prog-id=12 op=LOAD
Feb 9 09:47:15.643468 kernel: audit: type=1334 audit(1707472035.218:89): prog-id=3 op=UNLOAD
Feb 9 09:47:15.643498 kernel: audit: type=1334 audit(1707472035.220:90): prog-id=13 op=LOAD
Feb 9 09:47:15.643536 kernel: audit: type=1334 audit(1707472035.222:91): prog-id=14 op=LOAD
Feb 9 09:47:15.643563 kernel: audit: type=1334 audit(1707472035.222:92): prog-id=4 op=UNLOAD
Feb 9 09:47:15.643592 kernel: audit: type=1334 audit(1707472035.222:93): prog-id=5 op=UNLOAD
Feb 9 09:47:15.643619 kernel: audit: type=1334 audit(1707472035.225:94): prog-id=15 op=LOAD
Feb 9 09:47:15.643648 kernel: audit: type=1334 audit(1707472035.225:95): prog-id=12 op=UNLOAD
Feb 9 09:47:15.643680 kernel: audit: type=1334 audit(1707472035.227:96): prog-id=16 op=LOAD
Feb 9 09:47:15.643708 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 09:47:15.643741 kernel: audit: type=1334 audit(1707472035.230:97): prog-id=17 op=LOAD
Feb 9 09:47:15.643771 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 09:47:15.643800 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 09:47:15.643831 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 09:47:15.643863 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 09:47:15.643897 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 9 09:47:15.643929 systemd[1]: Created slice system-getty.slice.
Feb 9 09:47:15.643964 systemd[1]: Created slice system-modprobe.slice.
Feb 9 09:47:15.644016 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 09:47:15.644050 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 09:47:15.644081 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 09:47:15.644112 systemd[1]: Created slice user.slice.
Feb 9 09:47:15.644143 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 09:47:15.644172 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 09:47:15.644201 systemd[1]: Set up automount boot.automount.
Feb 9 09:47:15.644233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 09:47:15.644269 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 09:47:15.644298 systemd[1]: Stopped target initrd-fs.target.
Feb 9 09:47:15.644329 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 09:47:15.644358 systemd[1]: Reached target integritysetup.target.
Feb 9 09:47:15.644387 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 09:47:15.644419 systemd[1]: Reached target remote-fs.target.
Feb 9 09:47:15.644449 systemd[1]: Reached target slices.target.
Feb 9 09:47:15.644479 systemd[1]: Reached target swap.target.
Feb 9 09:47:15.644508 systemd[1]: Reached target torcx.target.
Feb 9 09:47:15.644539 systemd[1]: Reached target veritysetup.target.
Feb 9 09:47:15.644577 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 09:47:15.644607 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 09:47:15.644645 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 09:47:15.644676 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 09:47:15.644705 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 09:47:15.644734 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 09:47:15.644763 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 09:47:15.644793 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 09:47:15.644824 systemd[1]: Mounting media.mount...
Feb 9 09:47:15.644857 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 09:47:15.644886 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 09:47:15.644917 systemd[1]: Mounting tmp.mount...
Feb 9 09:47:15.644946 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 09:47:15.646056 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 09:47:15.646107 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 09:47:15.646137 systemd[1]: Starting modprobe@configfs.service...
Feb 9 09:47:15.646167 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 09:47:15.646199 systemd[1]: Starting modprobe@drm.service...
Feb 9 09:47:15.646252 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 09:47:15.646282 systemd[1]: Starting modprobe@fuse.service...
Feb 9 09:47:15.646314 systemd[1]: Starting modprobe@loop.service...
Feb 9 09:47:15.646348 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 09:47:15.646382 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 09:47:15.646411 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 09:47:15.646442 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 09:47:15.646474 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 09:47:15.646503 systemd[1]: Stopped systemd-journald.service.
Feb 9 09:47:15.646535 kernel: loop: module loaded
Feb 9 09:47:15.646564 systemd[1]: Starting systemd-journald.service...
Feb 9 09:47:15.646592 kernel: fuse: init (API version 7.34)
Feb 9 09:47:15.646623 systemd[1]: Starting systemd-modules-load.service...
Feb 9 09:47:15.646653 systemd[1]: Starting systemd-network-generator.service...
Feb 9 09:47:15.646683 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 09:47:15.646712 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 09:47:15.646741 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 09:47:15.646770 systemd[1]: Stopped verity-setup.service.
Feb 9 09:47:15.646802 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 09:47:15.646833 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 09:47:15.646862 systemd[1]: Mounted media.mount.
Feb 9 09:47:15.646895 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 09:47:15.646924 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 09:47:15.646953 systemd[1]: Mounted tmp.mount.
Feb 9 09:47:15.647030 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 09:47:15.647065 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 09:47:15.647095 systemd[1]: Finished modprobe@configfs.service.
Feb 9 09:47:15.647129 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 09:47:15.647159 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 09:47:15.647188 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 09:47:15.647218 systemd[1]: Finished modprobe@drm.service.
Feb 9 09:47:15.647247 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 09:47:15.647281 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 09:47:15.647311 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 09:47:15.647341 systemd[1]: Finished modprobe@fuse.service.
Feb 9 09:47:15.647373 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 09:47:15.647406 systemd-journald[1408]: Journal started
Feb 9 09:47:15.647506 systemd-journald[1408]: Runtime Journal (/run/log/journal/ec2a75664138492266bb530e2b94af23) is 8.0M, max 75.4M, 67.4M free.
Feb 9 09:47:11.075000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 09:47:11.252000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:47:11.252000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 09:47:11.252000 audit: BPF prog-id=10 op=LOAD
Feb 9 09:47:11.252000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 09:47:11.252000 audit: BPF prog-id=11 op=LOAD
Feb 9 09:47:11.252000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 09:47:11.445000 audit[1329]: AVC avc: denied { associate } for pid=1329 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 09:47:11.445000 audit[1329]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001458b2 a1=40000c6de0 a2=40000cd0c0 a3=32 items=0 ppid=1312 pid=1329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:11.445000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:47:11.449000 audit[1329]: AVC avc: denied { associate } for pid=1329 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 09:47:11.449000 audit[1329]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=4000145989 a2=1ed a3=0 items=2 ppid=1312 pid=1329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:11.449000 audit: CWD cwd="/"
Feb 9 09:47:11.449000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:47:11.449000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 09:47:11.449000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 09:47:15.218000 audit: BPF prog-id=12 op=LOAD
Feb 9 09:47:15.218000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 09:47:15.220000 audit: BPF prog-id=13 op=LOAD
Feb 9 09:47:15.222000 audit: BPF prog-id=14 op=LOAD
Feb 9 09:47:15.222000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 09:47:15.222000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 09:47:15.225000 audit: BPF prog-id=15 op=LOAD
Feb 9 09:47:15.225000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 09:47:15.227000 audit: BPF prog-id=16 op=LOAD
Feb 9 09:47:15.230000 audit: BPF prog-id=17 op=LOAD
Feb 9 09:47:15.230000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 09:47:15.230000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 09:47:15.232000 audit: BPF prog-id=18 op=LOAD
Feb 9 09:47:15.232000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 09:47:15.235000 audit: BPF prog-id=19 op=LOAD
Feb 9 09:47:15.237000 audit: BPF prog-id=20 op=LOAD
Feb 9 09:47:15.237000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 09:47:15.237000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 09:47:15.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.252000 audit: BPF prog-id=18 op=UNLOAD
Feb 9 09:47:15.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.505000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.507000 audit: BPF prog-id=21 op=LOAD
Feb 9 09:47:15.509000 audit: BPF prog-id=22 op=LOAD
Feb 9 09:47:15.509000 audit: BPF prog-id=23 op=LOAD
Feb 9 09:47:15.509000 audit: BPF prog-id=19 op=UNLOAD
Feb 9 09:47:15.509000 audit: BPF prog-id=20 op=UNLOAD
Feb 9 09:47:15.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.633000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 09:47:15.633000 audit[1408]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdbcf1d40 a2=4000 a3=1 items=0 ppid=1 pid=1408 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 09:47:15.633000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 09:47:15.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.660935 systemd[1]: Finished modprobe@loop.service.
Feb 9 09:47:15.661029 systemd[1]: Started systemd-journald.service.
Feb 9 09:47:15.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.216834 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 09:47:15.662000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:11.432499 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 09:47:15.240242 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 09:47:15.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:11.443927 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:47:15.661456 systemd[1]: Finished systemd-modules-load.service.
Feb 9 09:47:11.444026 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:47:15.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.663805 systemd[1]: Finished systemd-network-generator.service.
Feb 9 09:47:11.444105 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 09:47:15.666329 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 09:47:11.444133 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 09:47:15.668827 systemd[1]: Reached target network-pre.target.
Feb 9 09:47:11.444220 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 09:47:11.444254 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 09:47:11.444698 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 09:47:11.444791 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 09:47:11.444830 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 09:47:11.445737 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 09:47:11.445834 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 09:47:11.445883 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 09:47:11.445928 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 09:47:11.446030 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 09:47:11.446075 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:11Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 09:47:14.386434 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:47:14.386997 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:47:14.387247 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:47:14.387692 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 09:47:14.387802 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 09:47:14.387943 /usr/lib/systemd/system-generators/torcx-generator[1329]: time="2024-02-09T09:47:14Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 09:47:15.676610 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 09:47:15.680785 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 09:47:15.687363 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 09:47:15.691356 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 09:47:15.695880 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 09:47:15.698196 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 09:47:15.700369 systemd[1]: Starting systemd-random-seed.service...
Feb 9 09:47:15.702193 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 09:47:15.704571 systemd[1]: Starting systemd-sysctl.service...
Feb 9 09:47:15.712709 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 09:47:15.715010 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 09:47:15.742252 systemd-journald[1408]: Time spent on flushing to /var/log/journal/ec2a75664138492266bb530e2b94af23 is 71.162ms for 1157 entries.
Feb 9 09:47:15.742252 systemd-journald[1408]: System Journal (/var/log/journal/ec2a75664138492266bb530e2b94af23) is 8.0M, max 195.6M, 187.6M free.
Feb 9 09:47:15.825420 systemd-journald[1408]: Received client request to flush runtime journal.
Feb 9 09:47:15.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.756803 systemd[1]: Finished systemd-random-seed.service.
Feb 9 09:47:15.758940 systemd[1]: Reached target first-boot-complete.target.
Feb 9 09:47:15.765581 systemd[1]: Finished systemd-sysctl.service.
Feb 9 09:47:15.809196 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 09:47:15.815574 systemd[1]: Starting systemd-sysusers.service...
Feb 9 09:47:15.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.828514 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 09:47:15.872110 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 09:47:15.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:15.876186 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 09:47:15.891042 udevadm[1449]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 09:47:15.944634 systemd[1]: Finished systemd-sysusers.service.
Feb 9 09:47:15.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:16.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:16.591000 audit: BPF prog-id=24 op=LOAD
Feb 9 09:47:16.591000 audit: BPF prog-id=25 op=LOAD
Feb 9 09:47:16.592000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 09:47:16.592000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 09:47:16.589962 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 09:47:16.594034 systemd[1]: Starting systemd-udevd.service...
Feb 9 09:47:16.631442 systemd-udevd[1450]: Using default interface naming scheme 'v252'.
Feb 9 09:47:16.673093 systemd[1]: Started systemd-udevd.service.
Feb 9 09:47:16.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:16.675000 audit: BPF prog-id=26 op=LOAD
Feb 9 09:47:16.677804 systemd[1]: Starting systemd-networkd.service...
Feb 9 09:47:16.684000 audit: BPF prog-id=27 op=LOAD
Feb 9 09:47:16.684000 audit: BPF prog-id=28 op=LOAD
Feb 9 09:47:16.684000 audit: BPF prog-id=29 op=LOAD
Feb 9 09:47:16.686661 systemd[1]: Starting systemd-userdbd.service...
Feb 9 09:47:16.772550 systemd[1]: Started systemd-userdbd.service.
Feb 9 09:47:16.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:16.791542 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 09:47:16.807240 (udev-worker)[1464]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:47:16.907012 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1454)
Feb 9 09:47:16.915691 systemd-networkd[1453]: lo: Link UP
Feb 9 09:47:16.915718 systemd-networkd[1453]: lo: Gained carrier
Feb 9 09:47:16.916637 systemd-networkd[1453]: Enumeration completed
Feb 9 09:47:16.916802 systemd[1]: Started systemd-networkd.service.
Feb 9 09:47:16.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:16.920743 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 09:47:16.922887 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 09:47:16.929022 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:47:16.928999 systemd-networkd[1453]: eth0: Link UP
Feb 9 09:47:16.929286 systemd-networkd[1453]: eth0: Gained carrier
Feb 9 09:47:16.959247 systemd-networkd[1453]: eth0: DHCPv4 address 172.31.16.31/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 9 09:47:17.133339 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 09:47:17.136260 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 09:47:17.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:17.140322 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 09:47:17.184347 lvm[1569]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:47:17.221516 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 09:47:17.223587 systemd[1]: Reached target cryptsetup.target.
Feb 9 09:47:17.227286 systemd[1]: Starting lvm2-activation.service...
Feb 9 09:47:17.235568 lvm[1570]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 09:47:17.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:17.269799 systemd[1]: Finished lvm2-activation.service.
Feb 9 09:47:17.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 09:47:17.271810 systemd[1]: Reached target local-fs-pre.target.
Feb 9 09:47:17.273663 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 09:47:17.273709 systemd[1]: Reached target local-fs.target.
Feb 9 09:47:17.275503 systemd[1]: Reached target machines.target.
Feb 9 09:47:17.290048 systemd[1]: Starting ldconfig.service...
Feb 9 09:47:17.292521 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 09:47:17.292817 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 09:47:17.295210 systemd[1]: Starting systemd-boot-update.service...
Feb 9 09:47:17.301112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 09:47:17.307771 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 09:47:17.309809 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 09:47:17.309951 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 09:47:17.312333 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 09:47:17.332430 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1572 (bootctl) Feb 9 09:47:17.334735 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 09:47:17.367784 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 09:47:17.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.373930 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 09:47:17.382542 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 09:47:17.416258 systemd-tmpfiles[1575]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 09:47:17.438255 systemd-fsck[1582]: fsck.fat 4.2 (2021-01-31) Feb 9 09:47:17.438255 systemd-fsck[1582]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters Feb 9 09:47:17.442274 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 09:47:17.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.447636 systemd[1]: Mounting boot.mount... Feb 9 09:47:17.474519 systemd[1]: Mounted boot.mount. Feb 9 09:47:17.502713 systemd[1]: Finished systemd-boot-update.service. 
Feb 9 09:47:17.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.759352 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 09:47:17.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.764064 systemd[1]: Starting audit-rules.service... Feb 9 09:47:17.769024 systemd[1]: Starting clean-ca-certificates.service... Feb 9 09:47:17.773238 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 09:47:17.775000 audit: BPF prog-id=30 op=LOAD Feb 9 09:47:17.784000 audit: BPF prog-id=31 op=LOAD Feb 9 09:47:17.782331 systemd[1]: Starting systemd-resolved.service... Feb 9 09:47:17.787293 systemd[1]: Starting systemd-timesyncd.service... Feb 9 09:47:17.792929 systemd[1]: Starting systemd-update-utmp.service... Feb 9 09:47:17.858197 systemd[1]: Finished clean-ca-certificates.service. Feb 9 09:47:17.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.860303 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 09:47:17.868000 audit[1602]: SYSTEM_BOOT pid=1602 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.872391 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 09:47:17.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.968285 systemd[1]: Started systemd-timesyncd.service. Feb 9 09:47:17.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 09:47:17.970349 systemd[1]: Reached target time-set.target. Feb 9 09:47:18.023225 systemd-resolved[1599]: Positive Trust Anchors: Feb 9 09:47:18.023246 systemd-resolved[1599]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 09:47:18.023297 systemd-resolved[1599]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 09:47:17.658872 systemd-timesyncd[1600]: Contacted time server 23.141.40.124:123 (0.flatcar.pool.ntp.org). Feb 9 09:47:17.804394 systemd-journald[1408]: Time jumped backwards, rotating. Feb 9 09:47:17.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 09:47:17.768000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 09:47:17.768000 audit[1617]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9f7c7e0 a2=420 a3=0 items=0 ppid=1596 pid=1617 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 09:47:17.768000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 09:47:17.660025 systemd-timesyncd[1600]: Initial clock synchronization to Fri 2024-02-09 09:47:17.658646 UTC. Feb 9 09:47:17.805243 augenrules[1617]: No rules Feb 9 09:47:17.666266 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 09:47:17.770413 systemd[1]: Finished audit-rules.service. Feb 9 09:47:17.776650 systemd-resolved[1599]: Defaulting to hostname 'linux'. Feb 9 09:47:17.779702 systemd[1]: Started systemd-resolved.service. Feb 9 09:47:17.781619 systemd[1]: Reached target network.target. Feb 9 09:47:17.783321 systemd[1]: Reached target nss-lookup.target. Feb 9 09:47:17.812605 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 09:47:17.813859 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 09:47:18.025811 ldconfig[1571]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 09:47:18.031369 systemd[1]: Finished ldconfig.service. Feb 9 09:47:18.035438 systemd[1]: Starting systemd-update-done.service... Feb 9 09:47:18.048927 systemd[1]: Finished systemd-update-done.service. Feb 9 09:47:18.051029 systemd[1]: Reached target sysinit.target. Feb 9 09:47:18.053080 systemd[1]: Started motdgen.path. Feb 9 09:47:18.054855 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 09:47:18.057430 systemd[1]: Started logrotate.timer. 
Feb 9 09:47:18.059163 systemd[1]: Started mdadm.timer. Feb 9 09:47:18.060670 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 09:47:18.062526 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 09:47:18.062688 systemd[1]: Reached target paths.target. Feb 9 09:47:18.064312 systemd[1]: Reached target timers.target. Feb 9 09:47:18.067365 systemd[1]: Listening on dbus.socket. Feb 9 09:47:18.070919 systemd[1]: Starting docker.socket... Feb 9 09:47:18.077314 systemd[1]: Listening on sshd.socket. Feb 9 09:47:18.079234 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:47:18.080207 systemd[1]: Listening on docker.socket. Feb 9 09:47:18.082066 systemd[1]: Reached target sockets.target. Feb 9 09:47:18.083822 systemd[1]: Reached target basic.target. Feb 9 09:47:18.085504 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:47:18.085682 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 09:47:18.100738 systemd[1]: Starting containerd.service... Feb 9 09:47:18.104754 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Feb 9 09:47:18.109526 systemd[1]: Starting dbus.service... Feb 9 09:47:18.113700 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 09:47:18.122423 systemd[1]: Starting extend-filesystems.service... Feb 9 09:47:18.124109 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 09:47:18.126551 systemd[1]: Starting motdgen.service... Feb 9 09:47:18.131320 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 09:47:18.135274 systemd[1]: Starting prepare-critools.service... 
Feb 9 09:47:18.139619 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 09:47:18.146251 systemd[1]: Starting sshd-keygen.service... Feb 9 09:47:18.155670 systemd[1]: Starting systemd-logind.service... Feb 9 09:47:18.158009 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 09:47:18.158141 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 09:47:18.159019 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 09:47:18.160684 systemd[1]: Starting update-engine.service... Feb 9 09:47:18.165326 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 09:47:18.214684 jq[1643]: true Feb 9 09:47:18.225918 jq[1629]: false Feb 9 09:47:18.230221 tar[1648]: crictl Feb 9 09:47:18.230953 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 09:47:18.231313 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 09:47:18.234305 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 09:47:18.234701 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 09:47:18.236993 systemd-networkd[1453]: eth0: Gained IPv6LL Feb 9 09:47:18.239722 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 09:47:18.242003 systemd[1]: Reached target network-online.target. Feb 9 09:47:18.246668 systemd[1]: Started amazon-ssm-agent.service. Feb 9 09:47:18.251283 systemd[1]: Started nvidia.service. Feb 9 09:47:18.259053 jq[1657]: true Feb 9 09:47:18.273501 dbus-daemon[1628]: [system] SELinux support is enabled Feb 9 09:47:18.289142 tar[1649]: ./ Feb 9 09:47:18.289142 tar[1649]: ./loopback Feb 9 09:47:18.318650 systemd[1]: Started dbus.service. 
Feb 9 09:47:18.324751 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 09:47:18.324824 systemd[1]: Reached target system-config.target. Feb 9 09:47:18.326724 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 09:47:18.326758 systemd[1]: Reached target user-config.target. Feb 9 09:47:18.334607 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 09:47:18.335008 systemd[1]: Finished motdgen.service. Feb 9 09:47:18.352109 extend-filesystems[1630]: Found nvme0n1 Feb 9 09:47:18.356977 extend-filesystems[1630]: Found nvme0n1p1 Feb 9 09:47:18.374968 extend-filesystems[1630]: Found nvme0n1p2 Feb 9 09:47:18.376758 extend-filesystems[1630]: Found nvme0n1p3 Feb 9 09:47:18.381970 extend-filesystems[1630]: Found usr Feb 9 09:47:18.413283 extend-filesystems[1630]: Found nvme0n1p4 Feb 9 09:47:18.415336 extend-filesystems[1630]: Found nvme0n1p6 Feb 9 09:47:18.421005 extend-filesystems[1630]: Found nvme0n1p7 Feb 9 09:47:18.423403 dbus-daemon[1628]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.3' (uid=244 pid=1453 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Feb 9 09:47:18.426455 extend-filesystems[1630]: Found nvme0n1p9 Feb 9 09:47:18.433095 systemd[1]: Starting systemd-hostnamed.service... Feb 9 09:47:18.434814 extend-filesystems[1630]: Checking size of /dev/nvme0n1p9 Feb 9 09:47:18.534651 extend-filesystems[1630]: Resized partition /dev/nvme0n1p9 Feb 9 09:47:18.542560 extend-filesystems[1692]: resize2fs 1.46.5 (30-Dec-2021) Feb 9 09:47:18.554082 bash[1689]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:47:18.555554 systemd[1]: Finished update-ssh-keys-after-ignition.service. 
Feb 9 09:47:18.568279 update_engine[1641]: I0209 09:47:18.564999 1641 main.cc:92] Flatcar Update Engine starting Feb 9 09:47:18.582630 systemd[1]: Started update-engine.service. Feb 9 09:47:18.584422 update_engine[1641]: I0209 09:47:18.584386 1641 update_check_scheduler.cc:74] Next update check in 5m59s Feb 9 09:47:18.587209 systemd[1]: Started locksmithd.service. Feb 9 09:47:18.594277 amazon-ssm-agent[1659]: 2024/02/09 09:47:18 Failed to load instance info from vault. RegistrationKey does not exist. Feb 9 09:47:18.596846 amazon-ssm-agent[1659]: Initializing new seelog logger Feb 9 09:47:18.597131 amazon-ssm-agent[1659]: New Seelog Logger Creation Complete Feb 9 09:47:18.597244 amazon-ssm-agent[1659]: 2024/02/09 09:47:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:47:18.597244 amazon-ssm-agent[1659]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Feb 9 09:47:18.597572 amazon-ssm-agent[1659]: 2024/02/09 09:47:18 processing appconfig overrides Feb 9 09:47:18.610800 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Feb 9 09:47:18.664703 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Feb 9 09:47:18.686927 env[1647]: time="2024-02-09T09:47:18.686841545Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 09:47:18.688561 extend-filesystems[1692]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Feb 9 09:47:18.688561 extend-filesystems[1692]: old_desc_blocks = 1, new_desc_blocks = 1 Feb 9 09:47:18.688561 extend-filesystems[1692]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Feb 9 09:47:18.707794 extend-filesystems[1630]: Resized filesystem in /dev/nvme0n1p9 Feb 9 09:47:18.710684 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 09:47:18.711098 systemd[1]: Finished extend-filesystems.service. 
Feb 9 09:47:18.804940 tar[1649]: ./bandwidth Feb 9 09:47:18.826707 systemd-logind[1640]: Watching system buttons on /dev/input/event0 (Power Button) Feb 9 09:47:18.840932 systemd-logind[1640]: New seat seat0. Feb 9 09:47:18.845156 systemd[1]: Started systemd-logind.service. Feb 9 09:47:18.863642 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 09:47:18.942181 env[1647]: time="2024-02-09T09:47:18.942098790Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 09:47:18.950023 env[1647]: time="2024-02-09T09:47:18.949957386Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955131 env[1647]: time="2024-02-09T09:47:18.955047990Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955131 env[1647]: time="2024-02-09T09:47:18.955119414Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955561 env[1647]: time="2024-02-09T09:47:18.955510674Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955654 env[1647]: time="2024-02-09T09:47:18.955556442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955654 env[1647]: time="2024-02-09T09:47:18.955589886Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 09:47:18.955654 env[1647]: time="2024-02-09T09:47:18.955615446Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:18.955825 env[1647]: time="2024-02-09T09:47:18.955792326Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:18.956302 env[1647]: time="2024-02-09T09:47:18.956253078Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 09:47:18.956592 env[1647]: time="2024-02-09T09:47:18.956543934Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 09:47:18.956688 env[1647]: time="2024-02-09T09:47:18.956590014Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 09:47:18.956787 env[1647]: time="2024-02-09T09:47:18.956736966Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 09:47:18.984946 env[1647]: time="2024-02-09T09:47:18.984854383Z" level=info msg="metadata content store policy set" policy=shared Feb 9 09:47:18.997915 env[1647]: time="2024-02-09T09:47:18.997841611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 09:47:18.998060 env[1647]: time="2024-02-09T09:47:18.997920223Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Feb 9 09:47:18.998060 env[1647]: time="2024-02-09T09:47:18.997954495Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 09:47:18.998060 env[1647]: time="2024-02-09T09:47:18.998029399Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998240 env[1647]: time="2024-02-09T09:47:18.998067283Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998240 env[1647]: time="2024-02-09T09:47:18.998101279Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998240 env[1647]: time="2024-02-09T09:47:18.998132563Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998745 env[1647]: time="2024-02-09T09:47:18.998687527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998874 env[1647]: time="2024-02-09T09:47:18.998746711Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998874 env[1647]: time="2024-02-09T09:47:18.998810131Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.998874 env[1647]: time="2024-02-09T09:47:18.998848207Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 09:47:18.999042 env[1647]: time="2024-02-09T09:47:18.998879287Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 09:47:18.999138 env[1647]: time="2024-02-09T09:47:18.999096991Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 9 09:47:18.999326 env[1647]: time="2024-02-09T09:47:18.999287047Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 09:47:18.999761 env[1647]: time="2024-02-09T09:47:18.999720391Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 09:47:19.000169 env[1647]: time="2024-02-09T09:47:19.000120831Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000276 env[1647]: time="2024-02-09T09:47:19.000177039Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 09:47:19.000486 env[1647]: time="2024-02-09T09:47:19.000292239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000486 env[1647]: time="2024-02-09T09:47:19.000355815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000486 env[1647]: time="2024-02-09T09:47:19.000406275Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000486 env[1647]: time="2024-02-09T09:47:19.000437679Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000486 env[1647]: time="2024-02-09T09:47:19.000468075Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000753 env[1647]: time="2024-02-09T09:47:19.000501915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000753 env[1647]: time="2024-02-09T09:47:19.000553239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 9 09:47:19.000753 env[1647]: time="2024-02-09T09:47:19.000583575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.000753 env[1647]: time="2024-02-09T09:47:19.000617619Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 09:47:19.001010 env[1647]: time="2024-02-09T09:47:19.000926403Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.001010 env[1647]: time="2024-02-09T09:47:19.000961695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.001010 env[1647]: time="2024-02-09T09:47:19.000991239Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 09:47:19.001151 env[1647]: time="2024-02-09T09:47:19.001021263Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 09:47:19.001151 env[1647]: time="2024-02-09T09:47:19.001056879Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 09:47:19.001151 env[1647]: time="2024-02-09T09:47:19.001084479Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 09:47:19.001151 env[1647]: time="2024-02-09T09:47:19.001122423Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 09:47:19.001340 env[1647]: time="2024-02-09T09:47:19.001187247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 09:47:19.001662 env[1647]: time="2024-02-09T09:47:19.001559079Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 09:47:19.002687 env[1647]: time="2024-02-09T09:47:19.001667319Z" level=info msg="Connect containerd service" Feb 9 09:47:19.002687 env[1647]: time="2024-02-09T09:47:19.001724787Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 09:47:19.006956 env[1647]: time="2024-02-09T09:47:19.006318747Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 09:47:19.010418 env[1647]: time="2024-02-09T09:47:19.007387143Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 09:47:19.010418 env[1647]: time="2024-02-09T09:47:19.007506831Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 09:47:19.010418 env[1647]: time="2024-02-09T09:47:19.007604079Z" level=info msg="containerd successfully booted in 0.344614s" Feb 9 09:47:19.007722 systemd[1]: Started containerd.service. Feb 9 09:47:19.016870 dbus-daemon[1628]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 9 09:47:19.017134 systemd[1]: Started systemd-hostnamed.service. Feb 9 09:47:19.022484 dbus-daemon[1628]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1682 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 9 09:47:19.027125 systemd[1]: Starting polkit.service... 
Feb 9 09:47:19.029897 env[1647]: time="2024-02-09T09:47:19.029589303Z" level=info msg="Start subscribing containerd event" Feb 9 09:47:19.030196 env[1647]: time="2024-02-09T09:47:19.030158919Z" level=info msg="Start recovering state" Feb 9 09:47:19.040184 env[1647]: time="2024-02-09T09:47:19.040114911Z" level=info msg="Start event monitor" Feb 9 09:47:19.041291 env[1647]: time="2024-02-09T09:47:19.041239347Z" level=info msg="Start snapshots syncer" Feb 9 09:47:19.042114 env[1647]: time="2024-02-09T09:47:19.042051399Z" level=info msg="Start cni network conf syncer for default" Feb 9 09:47:19.043527 env[1647]: time="2024-02-09T09:47:19.043474611Z" level=info msg="Start streaming server" Feb 9 09:47:19.077298 polkitd[1757]: Started polkitd version 121 Feb 9 09:47:19.102144 tar[1649]: ./ptp Feb 9 09:47:19.104910 polkitd[1757]: Loading rules from directory /etc/polkit-1/rules.d Feb 9 09:47:19.106900 polkitd[1757]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 9 09:47:19.113379 polkitd[1757]: Finished loading, compiling and executing 2 rules Feb 9 09:47:19.114555 dbus-daemon[1628]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 9 09:47:19.114856 systemd[1]: Started polkit.service. Feb 9 09:47:19.117268 polkitd[1757]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 9 09:47:19.152257 systemd-hostnamed[1682]: Hostname set to (transient) Feb 9 09:47:19.152435 systemd-resolved[1599]: System hostname changed to 'ip-172-31-16-31'. 
Feb 9 09:47:19.350110 tar[1649]: ./vlan Feb 9 09:47:19.377297 coreos-metadata[1627]: Feb 09 09:47:19.376 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:47:19.380782 coreos-metadata[1627]: Feb 09 09:47:19.380 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1 Feb 9 09:47:19.381739 coreos-metadata[1627]: Feb 09 09:47:19.381 INFO Fetch successful Feb 9 09:47:19.381739 coreos-metadata[1627]: Feb 09 09:47:19.381 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 9 09:47:19.383107 coreos-metadata[1627]: Feb 09 09:47:19.382 INFO Fetch successful Feb 9 09:47:19.385533 unknown[1627]: wrote ssh authorized keys file for user: core Feb 9 09:47:19.405973 update-ssh-keys[1803]: Updated "/home/core/.ssh/authorized_keys" Feb 9 09:47:19.406696 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Feb 9 09:47:19.502416 tar[1649]: ./host-device Feb 9 09:47:19.600015 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Create new startup processor Feb 9 09:47:19.601834 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [LongRunningPluginsManager] registered plugins: {} Feb 9 09:47:19.602222 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing bookkeeping folders Feb 9 09:47:19.602366 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO removing the completed state files Feb 9 09:47:19.602649 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing bookkeeping folders for long running plugins Feb 9 09:47:19.602803 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing replies folder for MDS reply requests that couldn't reach the service Feb 9 09:47:19.603046 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing healthcheck folders for long running plugins Feb 9 09:47:19.603182 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing locations for inventory plugin Feb 9 09:47:19.603428 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing 
default location for custom inventory Feb 9 09:47:19.603552 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing default location for file inventory Feb 9 09:47:19.603805 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Initializing default location for role inventory Feb 9 09:47:19.604049 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Init the cloudwatchlogs publisher Feb 9 09:47:19.604164 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:softwareInventory Feb 9 09:47:19.604423 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:runDockerAction Feb 9 09:47:19.604570 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:refreshAssociation Feb 9 09:47:19.611588 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:configurePackage Feb 9 09:47:19.611726 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:downloadContent Feb 9 09:47:19.611726 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:runDocument Feb 9 09:47:19.611726 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:runPowerShellScript Feb 9 09:47:19.611927 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:updateSsmAgent Feb 9 09:47:19.611927 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform independent plugin aws:configureDocker Feb 9 09:47:19.611927 amazon-ssm-agent[1659]: 2024-02-09 
09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Successfully loaded platform dependent plugin aws:runShellScript Feb 9 09:47:19.612068 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0 Feb 9 09:47:19.612068 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO OS: linux, Arch: arm64 Feb 9 09:47:19.617355 amazon-ssm-agent[1659]: datastore file /var/lib/amazon/ssm/i-0be606c71254b3c1d/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute Feb 9 09:47:19.651575 tar[1649]: ./tuning Feb 9 09:47:19.700728 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] Starting document processing engine... Feb 9 09:47:19.757485 tar[1649]: ./vrf Feb 9 09:47:19.795701 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [EngineProcessor] Starting Feb 9 09:47:19.864727 tar[1649]: ./sbr Feb 9 09:47:19.890023 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing Feb 9 09:47:19.921759 tar[1649]: ./tap Feb 9 09:47:19.984549 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] Starting message polling Feb 9 09:47:20.024564 tar[1649]: ./dhcp Feb 9 09:47:20.079280 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] Starting send replies to MDS Feb 9 09:47:20.174399 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [instanceID=i-0be606c71254b3c1d] Starting association polling Feb 9 09:47:20.269500 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting Feb 9 09:47:20.281842 tar[1649]: ./static Feb 9 09:47:20.329486 systemd[1]: Finished prepare-critools.service. 
Feb 9 09:47:20.345667 tar[1649]: ./firewall Feb 9 09:47:20.364827 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [Association] Launching response handler Feb 9 09:47:20.416668 tar[1649]: ./macvlan Feb 9 09:47:20.460370 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing Feb 9 09:47:20.482010 tar[1649]: ./dummy Feb 9 09:47:20.546197 tar[1649]: ./bridge Feb 9 09:47:20.556038 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service Feb 9 09:47:20.615903 tar[1649]: ./ipvlan Feb 9 09:47:20.651956 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized Feb 9 09:47:20.679925 tar[1649]: ./portmap Feb 9 09:47:20.741298 tar[1649]: ./host-local Feb 9 09:47:20.748053 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] Starting session document processing engine... Feb 9 09:47:20.815313 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 09:47:20.827824 locksmithd[1697]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 09:47:20.844329 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] [EngineProcessor] Starting Feb 9 09:47:20.940791 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module. Feb 9 09:47:21.037452 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0be606c71254b3c1d, requestId: 1a6b53d7-16f8-42aa-973f-d16f0781926d Feb 9 09:47:21.134364 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [OfflineService] Starting document processing engine... 
Feb 9 09:47:21.231473 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [OfflineService] [EngineProcessor] Starting Feb 9 09:47:21.328739 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [OfflineService] [EngineProcessor] Initial processing Feb 9 09:47:21.426096 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [OfflineService] Starting message polling Feb 9 09:47:21.523754 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [OfflineService] Starting send replies to MDS Feb 9 09:47:21.621669 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [LongRunningPluginsManager] starting long running plugin manager Feb 9 09:47:21.719660 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:47:21.817880 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute Feb 9 09:47:21.916390 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] listening reply. Feb 9 09:47:22.014953 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck Feb 9 09:47:22.113720 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [StartupProcessor] Executing startup processor tasks Feb 9 09:47:22.212798 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running Feb 9 09:47:22.312001 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk Feb 9 09:47:22.411378 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2 Feb 9 09:47:22.473475 sshd_keygen[1667]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 09:47:22.509437 systemd[1]: Finished sshd-keygen.service. 
Feb 9 09:47:22.512029 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0be606c71254b3c1d?role=subscribe&stream=input Feb 9 09:47:22.513989 systemd[1]: Starting issuegen.service... Feb 9 09:47:22.524192 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 09:47:22.524567 systemd[1]: Finished issuegen.service. Feb 9 09:47:22.528937 systemd[1]: Starting systemd-user-sessions.service... Feb 9 09:47:22.542600 systemd[1]: Finished systemd-user-sessions.service. Feb 9 09:47:22.547667 systemd[1]: Started getty@tty1.service. Feb 9 09:47:22.552157 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 09:47:22.554319 systemd[1]: Reached target getty.target. Feb 9 09:47:22.556087 systemd[1]: Reached target multi-user.target. Feb 9 09:47:22.560386 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 09:47:22.575327 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 09:47:22.575684 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 09:47:22.578197 systemd[1]: Startup finished in 1.128s (kernel) + 10.368s (initrd) + 12.075s (userspace) = 23.572s. Feb 9 09:47:22.611845 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0be606c71254b3c1d?role=subscribe&stream=input Feb 9 09:47:22.711886 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] Starting receiving message from control channel Feb 9 09:47:22.812135 amazon-ssm-agent[1659]: 2024-02-09 09:47:19 INFO [MessageGatewayService] [EngineProcessor] Initial processing Feb 9 09:47:26.612725 amazon-ssm-agent[1659]: 2024-02-09 09:47:26 INFO [HealthCheck] HealthCheck reporting agent health. Feb 9 09:47:28.023855 systemd[1]: Created slice system-sshd.slice. 
Feb 9 09:47:28.026181 systemd[1]: Started sshd@0-172.31.16.31:22-139.178.89.65:59316.service. Feb 9 09:47:28.208200 sshd[1842]: Accepted publickey for core from 139.178.89.65 port 59316 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:28.211939 sshd[1842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:28.228735 systemd[1]: Created slice user-500.slice. Feb 9 09:47:28.231727 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 09:47:28.240936 systemd-logind[1640]: New session 1 of user core. Feb 9 09:47:28.248877 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 09:47:28.252005 systemd[1]: Starting user@500.service... Feb 9 09:47:28.258876 (systemd)[1845]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:28.429977 systemd[1845]: Queued start job for default target default.target. Feb 9 09:47:28.431005 systemd[1845]: Reached target paths.target. Feb 9 09:47:28.431058 systemd[1845]: Reached target sockets.target. Feb 9 09:47:28.431090 systemd[1845]: Reached target timers.target. Feb 9 09:47:28.431119 systemd[1845]: Reached target basic.target. Feb 9 09:47:28.431211 systemd[1845]: Reached target default.target. Feb 9 09:47:28.431274 systemd[1845]: Startup finished in 161ms. Feb 9 09:47:28.431930 systemd[1]: Started user@500.service. Feb 9 09:47:28.434007 systemd[1]: Started session-1.scope. Feb 9 09:47:28.579492 systemd[1]: Started sshd@1-172.31.16.31:22-139.178.89.65:51140.service. Feb 9 09:47:28.756547 sshd[1854]: Accepted publickey for core from 139.178.89.65 port 51140 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:28.759084 sshd[1854]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:28.766864 systemd-logind[1640]: New session 2 of user core. Feb 9 09:47:28.768129 systemd[1]: Started session-2.scope. 
Feb 9 09:47:28.898286 sshd[1854]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:28.903230 systemd-logind[1640]: Session 2 logged out. Waiting for processes to exit. Feb 9 09:47:28.903872 systemd[1]: sshd@1-172.31.16.31:22-139.178.89.65:51140.service: Deactivated successfully. Feb 9 09:47:28.905133 systemd[1]: session-2.scope: Deactivated successfully. Feb 9 09:47:28.906593 systemd-logind[1640]: Removed session 2. Feb 9 09:47:28.925702 systemd[1]: Started sshd@2-172.31.16.31:22-139.178.89.65:51150.service. Feb 9 09:47:29.096977 sshd[1860]: Accepted publickey for core from 139.178.89.65 port 51150 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:29.099391 sshd[1860]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:29.107982 systemd-logind[1640]: New session 3 of user core. Feb 9 09:47:29.108122 systemd[1]: Started session-3.scope. Feb 9 09:47:29.229617 sshd[1860]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:29.235398 systemd-logind[1640]: Session 3 logged out. Waiting for processes to exit. Feb 9 09:47:29.235823 systemd[1]: sshd@2-172.31.16.31:22-139.178.89.65:51150.service: Deactivated successfully. Feb 9 09:47:29.237053 systemd[1]: session-3.scope: Deactivated successfully. Feb 9 09:47:29.238528 systemd-logind[1640]: Removed session 3. Feb 9 09:47:29.258755 systemd[1]: Started sshd@3-172.31.16.31:22-139.178.89.65:51158.service. Feb 9 09:47:29.433903 sshd[1866]: Accepted publickey for core from 139.178.89.65 port 51158 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:29.436854 sshd[1866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:29.445286 systemd[1]: Started session-4.scope. Feb 9 09:47:29.446082 systemd-logind[1640]: New session 4 of user core. Feb 9 09:47:29.576714 sshd[1866]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:29.581972 systemd-logind[1640]: Session 4 logged out. 
Waiting for processes to exit. Feb 9 09:47:29.582130 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 09:47:29.583233 systemd[1]: sshd@3-172.31.16.31:22-139.178.89.65:51158.service: Deactivated successfully. Feb 9 09:47:29.585197 systemd-logind[1640]: Removed session 4. Feb 9 09:47:29.604893 systemd[1]: Started sshd@4-172.31.16.31:22-139.178.89.65:51172.service. Feb 9 09:47:29.781362 sshd[1872]: Accepted publickey for core from 139.178.89.65 port 51172 ssh2: RSA SHA256:1++YWC0h0fEpfkRPeemtMi9ARVJF0YKl/HjB0qv5R1M Feb 9 09:47:29.782422 sshd[1872]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 09:47:29.789853 systemd-logind[1640]: New session 5 of user core. Feb 9 09:47:29.790685 systemd[1]: Started session-5.scope. Feb 9 09:47:29.907347 sudo[1875]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 09:47:29.907877 sudo[1875]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 09:47:30.546697 systemd[1]: Reloading. Feb 9 09:47:30.637836 /usr/lib/systemd/system-generators/torcx-generator[1907]: time="2024-02-09T09:47:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:30.637898 /usr/lib/systemd/system-generators/torcx-generator[1907]: time="2024-02-09T09:47:30Z" level=info msg="torcx already run" Feb 9 09:47:30.725347 amazon-ssm-agent[1659]: 2024-02-09 09:47:30 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds. Feb 9 09:47:30.835402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 9 09:47:30.835442 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:30.873704 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:31.072117 systemd[1]: Started kubelet.service. Feb 9 09:47:31.095101 systemd[1]: Starting coreos-metadata.service... Feb 9 09:47:31.182632 kubelet[1959]: E0209 09:47:31.182525 1959 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 09:47:31.186935 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 09:47:31.187263 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 09:47:31.263448 coreos-metadata[1967]: Feb 09 09:47:31.262 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Feb 9 09:47:31.264883 coreos-metadata[1967]: Feb 09 09:47:31.264 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-id: Attempt #1 Feb 9 09:47:31.265509 coreos-metadata[1967]: Feb 09 09:47:31.265 INFO Fetch successful Feb 9 09:47:31.266019 coreos-metadata[1967]: Feb 09 09:47:31.265 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/instance-type: Attempt #1 Feb 9 09:47:31.266510 coreos-metadata[1967]: Feb 09 09:47:31.266 INFO Fetch successful Feb 9 09:47:31.266983 coreos-metadata[1967]: Feb 09 09:47:31.266 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/local-ipv4: Attempt #1 Feb 9 09:47:31.267469 coreos-metadata[1967]: Feb 09 09:47:31.267 INFO Fetch successful Feb 9 09:47:31.267884 coreos-metadata[1967]: Feb 09 09:47:31.267 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-ipv4: Attempt #1 Feb 9 09:47:31.268349 coreos-metadata[1967]: Feb 09 09:47:31.268 INFO Fetch successful Feb 9 09:47:31.268885 coreos-metadata[1967]: Feb 09 09:47:31.268 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/placement/availability-zone: Attempt #1 Feb 9 09:47:31.269461 coreos-metadata[1967]: Feb 09 09:47:31.268 INFO Fetch successful Feb 9 09:47:31.269724 coreos-metadata[1967]: Feb 09 09:47:31.269 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/hostname: Attempt #1 Feb 9 09:47:31.270011 coreos-metadata[1967]: Feb 09 09:47:31.269 INFO Fetch successful Feb 9 09:47:31.270263 coreos-metadata[1967]: Feb 09 09:47:31.270 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-hostname: Attempt #1 Feb 9 09:47:31.270544 coreos-metadata[1967]: Feb 09 09:47:31.270 INFO Fetch successful Feb 9 09:47:31.270827 coreos-metadata[1967]: Feb 09 09:47:31.270 INFO Fetching http://169.254.169.254/2019-10-01/dynamic/instance-identity/document: Attempt #1 Feb 9 09:47:31.271116 coreos-metadata[1967]: Feb 09 
09:47:31.270 INFO Fetch successful Feb 9 09:47:31.282957 systemd[1]: Finished coreos-metadata.service. Feb 9 09:47:31.682627 systemd[1]: Stopped kubelet.service. Feb 9 09:47:31.713510 systemd[1]: Reloading. Feb 9 09:47:31.840327 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-02-09T09:47:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 09:47:31.840395 /usr/lib/systemd/system-generators/torcx-generator[2024]: time="2024-02-09T09:47:31Z" level=info msg="torcx already run" Feb 9 09:47:31.987273 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 09:47:31.987311 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 09:47:32.025812 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 09:47:32.231456 systemd[1]: Started kubelet.service. Feb 9 09:47:32.325636 kubelet[2080]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:32.325636 kubelet[2080]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Feb 9 09:47:32.325636 kubelet[2080]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 09:47:32.326379 kubelet[2080]: I0209 09:47:32.325884 2080 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 09:47:33.432633 kubelet[2080]: I0209 09:47:33.432576 2080 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 09:47:33.432633 kubelet[2080]: I0209 09:47:33.432622 2080 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 09:47:33.433363 kubelet[2080]: I0209 09:47:33.432986 2080 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 09:47:33.437533 kubelet[2080]: I0209 09:47:33.436682 2080 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 09:47:33.444698 kubelet[2080]: W0209 09:47:33.444657 2080 machine.go:65] Cannot read vendor id correctly, set empty. Feb 9 09:47:33.445790 kubelet[2080]: I0209 09:47:33.445723 2080 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 09:47:33.446206 kubelet[2080]: I0209 09:47:33.446171 2080 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 09:47:33.446448 kubelet[2080]: I0209 09:47:33.446414 2080 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 09:47:33.446598 kubelet[2080]: I0209 09:47:33.446459 2080 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 09:47:33.446598 kubelet[2080]: I0209 09:47:33.446480 2080 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 09:47:33.446729 kubelet[2080]: I0209 
09:47:33.446632 2080 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:33.446870 kubelet[2080]: I0209 09:47:33.446838 2080 kubelet.go:393] "Attempting to sync node with API server" Feb 9 09:47:33.446942 kubelet[2080]: I0209 09:47:33.446876 2080 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 09:47:33.446942 kubelet[2080]: I0209 09:47:33.446914 2080 kubelet.go:309] "Adding apiserver pod source" Feb 9 09:47:33.447070 kubelet[2080]: I0209 09:47:33.446945 2080 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 09:47:33.448114 kubelet[2080]: I0209 09:47:33.448081 2080 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 09:47:33.448454 kubelet[2080]: E0209 09:47:33.448376 2080 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:33.448559 kubelet[2080]: E0209 09:47:33.448523 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:33.448982 kubelet[2080]: W0209 09:47:33.448958 2080 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 09:47:33.450151 kubelet[2080]: I0209 09:47:33.450124 2080 server.go:1232] "Started kubelet" Feb 9 09:47:33.451866 kubelet[2080]: E0209 09:47:33.451819 2080 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 09:47:33.451866 kubelet[2080]: E0209 09:47:33.451867 2080 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 09:47:33.456518 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 09:47:33.456655 kubelet[2080]: I0209 09:47:33.454674 2080 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 09:47:33.456655 kubelet[2080]: I0209 09:47:33.455100 2080 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 09:47:33.456655 kubelet[2080]: I0209 09:47:33.455174 2080 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 09:47:33.456655 kubelet[2080]: I0209 09:47:33.456234 2080 server.go:462] "Adding debug handlers to kubelet server" Feb 9 09:47:33.459151 kubelet[2080]: I0209 09:47:33.459098 2080 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 09:47:33.476574 kubelet[2080]: I0209 09:47:33.476520 2080 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 09:47:33.480951 kubelet[2080]: I0209 09:47:33.479947 2080 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 09:47:33.480951 kubelet[2080]: I0209 09:47:33.480334 2080 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 09:47:33.483182 kubelet[2080]: W0209 09:47:33.483125 2080 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:33.483182 kubelet[2080]: E0209 09:47:33.483183 2080 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 09:47:33.483417 kubelet[2080]: E0209 09:47:33.483236 2080 event.go:280] Server rejected event 
'&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c939cc3ce6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 450063078, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 450063078, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:33.483730 kubelet[2080]: W0209 09:47:33.483685 2080 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:33.483730 kubelet[2080]: E0209 09:47:33.483729 2080 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 09:47:33.483951 kubelet[2080]: E0209 09:47:33.483924 2080 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 09:47:33.485961 kubelet[2080]: E0209 09:47:33.485165 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c939e78522", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 451851042, time.Local), 
LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 451851042, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.485961 kubelet[2080]: W0209 09:47:33.485655 2080 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:33.485961 kubelet[2080]: E0209 09:47:33.485688 2080 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:33.535656 kubelet[2080]: I0209 09:47:33.535611 2080 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 09:47:33.535935 kubelet[2080]: I0209 09:47:33.535913 2080 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 09:47:33.536084 kubelet[2080]: I0209 09:47:33.536063 2080 state_mem.go:36] "Initialized new in-memory state store" Feb 9 09:47:33.537171 kubelet[2080]: E0209 09:47:33.536931 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed39a2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534431791, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534431791, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.539037 kubelet[2080]: E0209 09:47:33.538901 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3e6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534451363, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534451363, 
time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.539938 kubelet[2080]: I0209 09:47:33.539905 2080 policy_none.go:49] "None policy: Start" Feb 9 09:47:33.541096 kubelet[2080]: E0209 09:47:33.540897 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3fbdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534456799, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534456799, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:33.542515 kubelet[2080]: I0209 09:47:33.542484 2080 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 09:47:33.542749 kubelet[2080]: I0209 09:47:33.542728 2080 state_mem.go:35] "Initializing new in-memory state store" Feb 9 09:47:33.554681 systemd[1]: Created slice kubepods.slice. Feb 9 09:47:33.564323 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 09:47:33.569761 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 09:47:33.579199 kubelet[2080]: I0209 09:47:33.578037 2080 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.31" Feb 9 09:47:33.579892 kubelet[2080]: E0209 09:47:33.579854 2080 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.31" Feb 9 09:47:33.580225 kubelet[2080]: I0209 09:47:33.580192 2080 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 09:47:33.581619 kubelet[2080]: I0209 09:47:33.581574 2080 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 09:47:33.585341 kubelet[2080]: E0209 09:47:33.584672 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed39a2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534431791, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 577984087, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed39a2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.585924 kubelet[2080]: E0209 09:47:33.585884 2080 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172.31.16.31\" not found" Feb 9 09:47:33.586250 kubelet[2080]: E0209 09:47:33.586133 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3e6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534451363, time.Local), LastTimestamp:time.Date(2024, 
time.February, 9, 9, 47, 33, 577991623, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3e6a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.595357 kubelet[2080]: E0209 09:47:33.587894 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3fbdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534456799, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 577995775, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3fbdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:33.596679 kubelet[2080]: E0209 09:47:33.596544 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c941eeb7ab", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 586540459, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 586540459, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 09:47:33.626468 kubelet[2080]: I0209 09:47:33.626409 2080 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 09:47:33.628714 kubelet[2080]: I0209 09:47:33.628668 2080 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 9 09:47:33.628915 kubelet[2080]: I0209 09:47:33.628721 2080 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 09:47:33.628915 kubelet[2080]: I0209 09:47:33.628752 2080 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 09:47:33.628915 kubelet[2080]: E0209 09:47:33.628865 2080 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 09:47:33.630930 kubelet[2080]: W0209 09:47:33.630886 2080 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:33.630930 kubelet[2080]: E0209 09:47:33.630938 2080 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 09:47:33.686021 kubelet[2080]: E0209 09:47:33.685906 2080 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 09:47:33.781488 kubelet[2080]: I0209 09:47:33.781456 2080 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.31" Feb 9 09:47:33.783614 kubelet[2080]: E0209 09:47:33.783564 2080 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.31" Feb 9 09:47:33.783761 kubelet[2080]: E0209 09:47:33.783630 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", 
APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed39a2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534431791, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 781367912, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed39a2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:33.785073 kubelet[2080]: E0209 09:47:33.784963 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3e6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534451363, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 781375496, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3e6a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:33.786368 kubelet[2080]: E0209 09:47:33.786230 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3fbdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534456799, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 781379564, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3fbdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:34.088328 kubelet[2080]: E0209 09:47:34.088282 2080 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"172.31.16.31\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 09:47:34.186090 kubelet[2080]: I0209 09:47:34.185653 2080 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.31" Feb 9 09:47:34.187178 kubelet[2080]: E0209 09:47:34.187066 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed39a2f", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 172.31.16.31 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534431791, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 34, 185587278, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed39a2f" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:34.187848 kubelet[2080]: E0209 09:47:34.187813 2080 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="172.31.16.31" Feb 9 09:47:34.188623 kubelet[2080]: E0209 09:47:34.188512 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3e6a3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 172.31.16.31 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534451363, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 34, 185613906, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3e6a3" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:34.189822 kubelet[2080]: E0209 09:47:34.189691 2080 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"172.31.16.31.17b228c93ed3fbdf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"172.31.16.31", UID:"172.31.16.31", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 172.31.16.31 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"172.31.16.31"}, FirstTimestamp:time.Date(2024, time.February, 9, 9, 47, 33, 534456799, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 9, 47, 34, 185618694, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"172.31.16.31"}': 'events "172.31.16.31.17b228c93ed3fbdf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 09:47:34.324013 kubelet[2080]: W0209 09:47:34.323980 2080 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "172.31.16.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:34.324223 kubelet[2080]: E0209 09:47:34.324203 2080 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "172.31.16.31" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 09:47:34.435638 kubelet[2080]: I0209 09:47:34.435536 2080 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 09:47:34.448997 kubelet[2080]: E0209 09:47:34.448946 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:34.853429 kubelet[2080]: E0209 09:47:34.853380 2080 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "172.31.16.31" not found Feb 9 09:47:34.894665 kubelet[2080]: E0209 09:47:34.894633 2080 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172.31.16.31\" not found" node="172.31.16.31" Feb 9 09:47:34.989597 kubelet[2080]: I0209 09:47:34.989553 2080 kubelet_node_status.go:70] "Attempting to register node" node="172.31.16.31" Feb 9 09:47:34.997223 kubelet[2080]: I0209 09:47:34.997171 2080 kubelet_node_status.go:73] "Successfully registered node" node="172.31.16.31" Feb 9 09:47:35.026657 kubelet[2080]: E0209 09:47:35.026582 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.127657 kubelet[2080]: E0209 09:47:35.127537 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"172.31.16.31\" not found" Feb 9 09:47:35.228094 kubelet[2080]: E0209 09:47:35.228061 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.258359 sudo[1875]: pam_unix(sudo:session): session closed for user root Feb 9 09:47:35.282734 sshd[1872]: pam_unix(sshd:session): session closed for user core Feb 9 09:47:35.287069 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 09:47:35.288828 systemd[1]: sshd@4-172.31.16.31:22-139.178.89.65:51172.service: Deactivated successfully. Feb 9 09:47:35.288909 systemd-logind[1640]: Session 5 logged out. Waiting for processes to exit. Feb 9 09:47:35.291226 systemd-logind[1640]: Removed session 5. Feb 9 09:47:35.329400 kubelet[2080]: E0209 09:47:35.329345 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.430590 kubelet[2080]: E0209 09:47:35.429974 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.449438 kubelet[2080]: E0209 09:47:35.449397 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:35.531036 kubelet[2080]: E0209 09:47:35.530992 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.632045 kubelet[2080]: E0209 09:47:35.631997 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.733565 kubelet[2080]: E0209 09:47:35.733098 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.834787 kubelet[2080]: E0209 09:47:35.834724 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:35.935368 kubelet[2080]: E0209 
09:47:35.935342 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.036349 kubelet[2080]: E0209 09:47:36.035993 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.136660 kubelet[2080]: E0209 09:47:36.136630 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.237294 kubelet[2080]: E0209 09:47:36.237263 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.337944 kubelet[2080]: E0209 09:47:36.337918 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.438694 kubelet[2080]: E0209 09:47:36.438652 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.449906 kubelet[2080]: E0209 09:47:36.449882 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:36.539490 kubelet[2080]: E0209 09:47:36.539448 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.640314 kubelet[2080]: E0209 09:47:36.639851 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.741063 kubelet[2080]: E0209 09:47:36.741033 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.841758 kubelet[2080]: E0209 09:47:36.841735 2080 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:36.943041 kubelet[2080]: E0209 09:47:36.942634 2080 kubelet_node_status.go:458] "Error getting the 
current node from lister" err="node \"172.31.16.31\" not found" Feb 9 09:47:37.043783 kubelet[2080]: I0209 09:47:37.043724 2080 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 09:47:37.044524 env[1647]: time="2024-02-09T09:47:37.044300216Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 09:47:37.045115 kubelet[2080]: I0209 09:47:37.044673 2080 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 09:47:37.450439 kubelet[2080]: I0209 09:47:37.450379 2080 apiserver.go:52] "Watching apiserver" Feb 9 09:47:37.451071 kubelet[2080]: E0209 09:47:37.450503 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:37.454212 kubelet[2080]: I0209 09:47:37.454148 2080 topology_manager.go:215] "Topology Admit Handler" podUID="5294c368-c4e6-413a-938e-b6887f2e77e5" podNamespace="kube-system" podName="kube-proxy-ng44z" Feb 9 09:47:37.454367 kubelet[2080]: I0209 09:47:37.454291 2080 topology_manager.go:215] "Topology Admit Handler" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" podNamespace="kube-system" podName="cilium-6tnrm" Feb 9 09:47:37.465591 systemd[1]: Created slice kubepods-burstable-pod2014d963_9ed4_4a31_8855_de9cfcc2a7c5.slice. Feb 9 09:47:37.477213 kubelet[2080]: I0209 09:47:37.477175 2080 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 09:47:37.485233 systemd[1]: Created slice kubepods-besteffort-pod5294c368_c4e6_413a_938e_b6887f2e77e5.slice. 
Feb 9 09:47:37.504287 kubelet[2080]: I0209 09:47:37.504228 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-cgroup\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504494 kubelet[2080]: I0209 09:47:37.504304 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-config-path\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504494 kubelet[2080]: I0209 09:47:37.504352 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-run\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504494 kubelet[2080]: I0209 09:47:37.504399 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-etc-cni-netd\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504494 kubelet[2080]: I0209 09:47:37.504442 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-xtables-lock\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504494 kubelet[2080]: I0209 09:47:37.504487 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-net\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504545 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-bpf-maps\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504589 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cni-path\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504632 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-kernel\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504676 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbbhq\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-kube-api-access-lbbhq\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504726 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/5294c368-c4e6-413a-938e-b6887f2e77e5-xtables-lock\") pod \"kube-proxy-ng44z\" (UID: \"5294c368-c4e6-413a-938e-b6887f2e77e5\") " pod="kube-system/kube-proxy-ng44z" Feb 9 09:47:37.504862 kubelet[2080]: I0209 09:47:37.504790 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5294c368-c4e6-413a-938e-b6887f2e77e5-lib-modules\") pod \"kube-proxy-ng44z\" (UID: \"5294c368-c4e6-413a-938e-b6887f2e77e5\") " pod="kube-system/kube-proxy-ng44z" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.504840 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccqqx\" (UniqueName: \"kubernetes.io/projected/5294c368-c4e6-413a-938e-b6887f2e77e5-kube-api-access-ccqqx\") pod \"kube-proxy-ng44z\" (UID: \"5294c368-c4e6-413a-938e-b6887f2e77e5\") " pod="kube-system/kube-proxy-ng44z" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.504887 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hostproc\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.504932 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-lib-modules\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.504977 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-clustermesh-secrets\") pod \"cilium-6tnrm\" (UID: 
\"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.505022 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hubble-tls\") pod \"cilium-6tnrm\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " pod="kube-system/cilium-6tnrm" Feb 9 09:47:37.505185 kubelet[2080]: I0209 09:47:37.505070 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5294c368-c4e6-413a-938e-b6887f2e77e5-kube-proxy\") pod \"kube-proxy-ng44z\" (UID: \"5294c368-c4e6-413a-938e-b6887f2e77e5\") " pod="kube-system/kube-proxy-ng44z" Feb 9 09:47:37.783005 env[1647]: time="2024-02-09T09:47:37.782856744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6tnrm,Uid:2014d963-9ed4-4a31-8855-de9cfcc2a7c5,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:37.795377 env[1647]: time="2024-02-09T09:47:37.794959800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ng44z,Uid:5294c368-c4e6-413a-938e-b6887f2e77e5,Namespace:kube-system,Attempt:0,}" Feb 9 09:47:38.332166 env[1647]: time="2024-02-09T09:47:38.332106527Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.334376 env[1647]: time="2024-02-09T09:47:38.334331387Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.339324 env[1647]: time="2024-02-09T09:47:38.339274487Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 09:47:38.341643 env[1647]: time="2024-02-09T09:47:38.341581679Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.346040 env[1647]: time="2024-02-09T09:47:38.345976643Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.347419 env[1647]: time="2024-02-09T09:47:38.347382983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.351505 env[1647]: time="2024-02-09T09:47:38.351447443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.355512 env[1647]: time="2024-02-09T09:47:38.355440263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:38.392152 env[1647]: time="2024-02-09T09:47:38.390679067Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:38.392152 env[1647]: time="2024-02-09T09:47:38.390809651Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:38.392152 env[1647]: time="2024-02-09T09:47:38.390869927Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:38.392152 env[1647]: time="2024-02-09T09:47:38.391239299Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010 pid=2138 runtime=io.containerd.runc.v2 Feb 9 09:47:38.393942 env[1647]: time="2024-02-09T09:47:38.393810767Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:47:38.393942 env[1647]: time="2024-02-09T09:47:38.393893327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:47:38.394251 env[1647]: time="2024-02-09T09:47:38.393920315Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:47:38.394371 env[1647]: time="2024-02-09T09:47:38.394294379Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/319fe8903c271a57e4721699a9bb1749a1ea9a55a09507b5575cf216d926fd19 pid=2142 runtime=io.containerd.runc.v2 Feb 9 09:47:38.426622 systemd[1]: Started cri-containerd-319fe8903c271a57e4721699a9bb1749a1ea9a55a09507b5575cf216d926fd19.scope. Feb 9 09:47:38.436750 systemd[1]: Started cri-containerd-b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010.scope. 
Feb 9 09:47:38.451224 kubelet[2080]: E0209 09:47:38.451166 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:38.510498 env[1647]: time="2024-02-09T09:47:38.510440580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6tnrm,Uid:2014d963-9ed4-4a31-8855-de9cfcc2a7c5,Namespace:kube-system,Attempt:0,} returns sandbox id \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\"" Feb 9 09:47:38.515270 env[1647]: time="2024-02-09T09:47:38.515176548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ng44z,Uid:5294c368-c4e6-413a-938e-b6887f2e77e5,Namespace:kube-system,Attempt:0,} returns sandbox id \"319fe8903c271a57e4721699a9bb1749a1ea9a55a09507b5575cf216d926fd19\"" Feb 9 09:47:38.519113 env[1647]: time="2024-02-09T09:47:38.519065712Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 09:47:38.619926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3702395853.mount: Deactivated successfully. 
Feb 9 09:47:39.452378 kubelet[2080]: E0209 09:47:39.452317 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:40.453428 kubelet[2080]: E0209 09:47:40.453370 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:41.454006 kubelet[2080]: E0209 09:47:41.453932 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:42.454153 kubelet[2080]: E0209 09:47:42.454089 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:43.454561 kubelet[2080]: E0209 09:47:43.454499 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:44.455800 kubelet[2080]: E0209 09:47:44.455685 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:45.071693 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2767687141.mount: Deactivated successfully. 
Feb 9 09:47:45.456480 kubelet[2080]: E0209 09:47:45.456420 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:46.457367 kubelet[2080]: E0209 09:47:46.457328 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:47.458473 kubelet[2080]: E0209 09:47:47.458396 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:48.458649 kubelet[2080]: E0209 09:47:48.458587 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:49.000404 env[1647]: time="2024-02-09T09:47:49.000343292Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:49.020320 env[1647]: time="2024-02-09T09:47:49.020247176Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:49.027112 env[1647]: time="2024-02-09T09:47:49.027058640Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:49.028229 env[1647]: time="2024-02-09T09:47:49.028160324Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Feb 9 09:47:49.030454 env[1647]: time="2024-02-09T09:47:49.029962148Z" level=info msg="PullImage 
\"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 09:47:49.035733 env[1647]: time="2024-02-09T09:47:49.035657000Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 09:47:49.060290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561901062.mount: Deactivated successfully. Feb 9 09:47:49.070740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount426312106.mount: Deactivated successfully. Feb 9 09:47:49.078704 env[1647]: time="2024-02-09T09:47:49.078623300Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\"" Feb 9 09:47:49.080053 env[1647]: time="2024-02-09T09:47:49.079985372Z" level=info msg="StartContainer for \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\"" Feb 9 09:47:49.117225 systemd[1]: Started cri-containerd-83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848.scope. Feb 9 09:47:49.186462 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Feb 9 09:47:49.196053 env[1647]: time="2024-02-09T09:47:49.195996429Z" level=info msg="StartContainer for \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\" returns successfully" Feb 9 09:47:49.200468 systemd[1]: cri-containerd-83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848.scope: Deactivated successfully. Feb 9 09:47:49.460130 kubelet[2080]: E0209 09:47:49.460036 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:50.057068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848-rootfs.mount: Deactivated successfully. 
Feb 9 09:47:50.460898 kubelet[2080]: E0209 09:47:50.460846 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:50.724249 env[1647]: time="2024-02-09T09:47:50.723966897Z" level=info msg="shim disconnected" id=83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848 Feb 9 09:47:50.725025 env[1647]: time="2024-02-09T09:47:50.724971877Z" level=warning msg="cleaning up after shim disconnected" id=83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848 namespace=k8s.io Feb 9 09:47:50.725189 env[1647]: time="2024-02-09T09:47:50.725160283Z" level=info msg="cleaning up dead shim" Feb 9 09:47:50.740254 env[1647]: time="2024-02-09T09:47:50.740197906Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2260 runtime=io.containerd.runc.v2\n" Feb 9 09:47:51.433172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330339506.mount: Deactivated successfully. Feb 9 09:47:51.461388 kubelet[2080]: E0209 09:47:51.461319 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:51.692433 env[1647]: time="2024-02-09T09:47:51.692097456Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 09:47:51.726914 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3037699383.mount: Deactivated successfully. 
Feb 9 09:47:51.733011 env[1647]: time="2024-02-09T09:47:51.732927463Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\"" Feb 9 09:47:51.734540 env[1647]: time="2024-02-09T09:47:51.734493921Z" level=info msg="StartContainer for \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\"" Feb 9 09:47:51.775092 systemd[1]: Started cri-containerd-a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0.scope. Feb 9 09:47:51.850186 env[1647]: time="2024-02-09T09:47:51.850119974Z" level=info msg="StartContainer for \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\" returns successfully" Feb 9 09:47:51.870985 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 09:47:51.872112 systemd[1]: Stopped systemd-sysctl.service. Feb 9 09:47:51.872930 systemd[1]: Stopping systemd-sysctl.service... Feb 9 09:47:51.880109 systemd[1]: Starting systemd-sysctl.service... Feb 9 09:47:51.886440 systemd[1]: cri-containerd-a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0.scope: Deactivated successfully. Feb 9 09:47:51.901562 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 09:47:52.145405 env[1647]: time="2024-02-09T09:47:52.145340335Z" level=info msg="shim disconnected" id=a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0 Feb 9 09:47:52.145728 env[1647]: time="2024-02-09T09:47:52.145689976Z" level=warning msg="cleaning up after shim disconnected" id=a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0 namespace=k8s.io Feb 9 09:47:52.145869 env[1647]: time="2024-02-09T09:47:52.145841201Z" level=info msg="cleaning up dead shim" Feb 9 09:47:52.160115 env[1647]: time="2024-02-09T09:47:52.160061014Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2326 runtime=io.containerd.runc.v2\n" Feb 9 09:47:52.369922 env[1647]: time="2024-02-09T09:47:52.369847709Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:52.374134 env[1647]: time="2024-02-09T09:47:52.374072194Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:52.377796 env[1647]: time="2024-02-09T09:47:52.377716707Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:52.381496 env[1647]: time="2024-02-09T09:47:52.381435135Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 09:47:52.382679 env[1647]: time="2024-02-09T09:47:52.382615391Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference 
\"sha256:2d8b4f784b5f439fa536676861ad1144130a981e5ac011d08829ed921477ec74\"" Feb 9 09:47:52.386719 env[1647]: time="2024-02-09T09:47:52.386629986Z" level=info msg="CreateContainer within sandbox \"319fe8903c271a57e4721699a9bb1749a1ea9a55a09507b5575cf216d926fd19\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 09:47:52.411487 env[1647]: time="2024-02-09T09:47:52.411315911Z" level=info msg="CreateContainer within sandbox \"319fe8903c271a57e4721699a9bb1749a1ea9a55a09507b5575cf216d926fd19\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a9b1d379f908446b9cf40577bd1bec13c73c501adc40104c23663bfadfbc7ed4\"" Feb 9 09:47:52.413098 env[1647]: time="2024-02-09T09:47:52.413048060Z" level=info msg="StartContainer for \"a9b1d379f908446b9cf40577bd1bec13c73c501adc40104c23663bfadfbc7ed4\"" Feb 9 09:47:52.455700 systemd[1]: run-containerd-runc-k8s.io-a9b1d379f908446b9cf40577bd1bec13c73c501adc40104c23663bfadfbc7ed4-runc.N5jtzk.mount: Deactivated successfully. Feb 9 09:47:52.458315 systemd[1]: Started cri-containerd-a9b1d379f908446b9cf40577bd1bec13c73c501adc40104c23663bfadfbc7ed4.scope. 
Feb 9 09:47:52.461641 kubelet[2080]: E0209 09:47:52.461563 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:52.539963 env[1647]: time="2024-02-09T09:47:52.539876578Z" level=info msg="StartContainer for \"a9b1d379f908446b9cf40577bd1bec13c73c501adc40104c23663bfadfbc7ed4\" returns successfully" Feb 9 09:47:52.702030 env[1647]: time="2024-02-09T09:47:52.701845323Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 09:47:52.727212 env[1647]: time="2024-02-09T09:47:52.727130643Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\"" Feb 9 09:47:52.728224 env[1647]: time="2024-02-09T09:47:52.728168990Z" level=info msg="StartContainer for \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\"" Feb 9 09:47:52.733587 kubelet[2080]: I0209 09:47:52.733543 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ng44z" podStartSLOduration=3.868250645 podCreationTimestamp="2024-02-09 09:47:35 +0000 UTC" firstStartedPulling="2024-02-09 09:47:38.518745444 +0000 UTC m=+6.281158761" lastFinishedPulling="2024-02-09 09:47:52.383977481 +0000 UTC m=+20.146390798" observedRunningTime="2024-02-09 09:47:52.710004229 +0000 UTC m=+20.472417594" watchObservedRunningTime="2024-02-09 09:47:52.733482682 +0000 UTC m=+20.495896035" Feb 9 09:47:52.773709 systemd[1]: Started cri-containerd-902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04.scope. 
Feb 9 09:47:52.852334 env[1647]: time="2024-02-09T09:47:52.852250740Z" level=info msg="StartContainer for \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\" returns successfully" Feb 9 09:47:52.859674 systemd[1]: cri-containerd-902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04.scope: Deactivated successfully. Feb 9 09:47:52.998539 env[1647]: time="2024-02-09T09:47:52.998376017Z" level=info msg="shim disconnected" id=902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04 Feb 9 09:47:52.998539 env[1647]: time="2024-02-09T09:47:52.998446144Z" level=warning msg="cleaning up after shim disconnected" id=902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04 namespace=k8s.io Feb 9 09:47:52.998539 env[1647]: time="2024-02-09T09:47:52.998468888Z" level=info msg="cleaning up dead shim" Feb 9 09:47:53.014585 env[1647]: time="2024-02-09T09:47:53.014515462Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2503 runtime=io.containerd.runc.v2\n" Feb 9 09:47:53.433283 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04-rootfs.mount: Deactivated successfully. 
Feb 9 09:47:53.447465 kubelet[2080]: E0209 09:47:53.447410 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:53.462699 kubelet[2080]: E0209 09:47:53.462659 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:53.707859 env[1647]: time="2024-02-09T09:47:53.707369110Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 09:47:53.727636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2474304541.mount: Deactivated successfully. Feb 9 09:47:53.744164 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325461484.mount: Deactivated successfully. Feb 9 09:47:53.752155 env[1647]: time="2024-02-09T09:47:53.752088869Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\"" Feb 9 09:47:53.753517 env[1647]: time="2024-02-09T09:47:53.753462136Z" level=info msg="StartContainer for \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\"" Feb 9 09:47:53.783965 systemd[1]: Started cri-containerd-9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3.scope. Feb 9 09:47:53.841056 systemd[1]: cri-containerd-9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3.scope: Deactivated successfully. 
Feb 9 09:47:53.844380 env[1647]: time="2024-02-09T09:47:53.844009853Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2014d963_9ed4_4a31_8855_de9cfcc2a7c5.slice/cri-containerd-9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3.scope/memory.events\": no such file or directory" Feb 9 09:47:53.847181 env[1647]: time="2024-02-09T09:47:53.847123228Z" level=info msg="StartContainer for \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\" returns successfully" Feb 9 09:47:53.891583 env[1647]: time="2024-02-09T09:47:53.891521289Z" level=info msg="shim disconnected" id=9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3 Feb 9 09:47:53.892472 env[1647]: time="2024-02-09T09:47:53.892433627Z" level=warning msg="cleaning up after shim disconnected" id=9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3 namespace=k8s.io Feb 9 09:47:53.892600 env[1647]: time="2024-02-09T09:47:53.892571887Z" level=info msg="cleaning up dead shim" Feb 9 09:47:53.906226 env[1647]: time="2024-02-09T09:47:53.906168955Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:47:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2605 runtime=io.containerd.runc.v2\n" Feb 9 09:47:54.463729 kubelet[2080]: E0209 09:47:54.463678 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:54.714162 env[1647]: time="2024-02-09T09:47:54.713949766Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 09:47:54.739112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1685225797.mount: Deactivated successfully. 
Feb 9 09:47:54.751029 env[1647]: time="2024-02-09T09:47:54.750970290Z" level=info msg="CreateContainer within sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\"" Feb 9 09:47:54.752040 env[1647]: time="2024-02-09T09:47:54.751972234Z" level=info msg="StartContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\"" Feb 9 09:47:54.790309 systemd[1]: Started cri-containerd-85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367.scope. Feb 9 09:47:54.856689 env[1647]: time="2024-02-09T09:47:54.856622239Z" level=info msg="StartContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" returns successfully" Feb 9 09:47:55.037812 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! Feb 9 09:47:55.077369 kubelet[2080]: I0209 09:47:55.076739 2080 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 09:47:55.465024 kubelet[2080]: E0209 09:47:55.464982 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:55.649804 kernel: Initializing XFRM netlink socket Feb 9 09:47:55.655810 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
Feb 9 09:47:55.744513 kubelet[2080]: I0209 09:47:55.744378 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-6tnrm" podStartSLOduration=10.232695163 podCreationTimestamp="2024-02-09 09:47:35 +0000 UTC" firstStartedPulling="2024-02-09 09:47:38.517286712 +0000 UTC m=+6.279700041" lastFinishedPulling="2024-02-09 09:47:49.028890368 +0000 UTC m=+16.791303685" observedRunningTime="2024-02-09 09:47:55.743706779 +0000 UTC m=+23.506120120" watchObservedRunningTime="2024-02-09 09:47:55.744298807 +0000 UTC m=+23.506712136" Feb 9 09:47:56.466527 kubelet[2080]: E0209 09:47:56.466467 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:57.440474 systemd-networkd[1453]: cilium_host: Link UP Feb 9 09:47:57.441731 (udev-worker)[2484]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:57.441732 (udev-worker)[2482]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:57.452135 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 09:47:57.452244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 09:47:57.445452 systemd-networkd[1453]: cilium_net: Link UP Feb 9 09:47:57.448263 systemd-networkd[1453]: cilium_net: Gained carrier Feb 9 09:47:57.450953 systemd-networkd[1453]: cilium_host: Gained carrier Feb 9 09:47:57.468440 kubelet[2080]: E0209 09:47:57.468092 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:57.606647 (udev-worker)[2752]: Network interface NamePolicy= disabled on kernel command line. 
Feb 9 09:47:57.616254 systemd-networkd[1453]: cilium_vxlan: Link UP Feb 9 09:47:57.616276 systemd-networkd[1453]: cilium_vxlan: Gained carrier Feb 9 09:47:58.075828 kernel: NET: Registered PF_ALG protocol family Feb 9 09:47:58.300981 systemd-networkd[1453]: cilium_host: Gained IPv6LL Feb 9 09:47:58.364943 systemd-networkd[1453]: cilium_net: Gained IPv6LL Feb 9 09:47:58.468526 kubelet[2080]: E0209 09:47:58.468480 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:47:59.336460 (udev-worker)[2753]: Network interface NamePolicy= disabled on kernel command line. Feb 9 09:47:59.349169 systemd-networkd[1453]: lxc_health: Link UP Feb 9 09:47:59.361977 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 09:47:59.362417 systemd-networkd[1453]: lxc_health: Gained carrier Feb 9 09:47:59.431817 kubelet[2080]: I0209 09:47:59.430729 2080 topology_manager.go:215] "Topology Admit Handler" podUID="645441bb-d8ad-4690-8ed3-c34b9e044d5e" podNamespace="default" podName="nginx-deployment-6d5f899847-hh7kp" Feb 9 09:47:59.441254 systemd[1]: Created slice kubepods-besteffort-pod645441bb_d8ad_4690_8ed3_c34b9e044d5e.slice. 
Feb 9 09:47:59.463007 kubelet[2080]: I0209 09:47:59.462965 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hr7k\" (UniqueName: \"kubernetes.io/projected/645441bb-d8ad-4690-8ed3-c34b9e044d5e-kube-api-access-5hr7k\") pod \"nginx-deployment-6d5f899847-hh7kp\" (UID: \"645441bb-d8ad-4690-8ed3-c34b9e044d5e\") " pod="default/nginx-deployment-6d5f899847-hh7kp"
Feb 9 09:47:59.470122 kubelet[2080]: E0209 09:47:59.470076 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:47:59.581021 systemd-networkd[1453]: cilium_vxlan: Gained IPv6LL
Feb 9 09:47:59.750821 env[1647]: time="2024-02-09T09:47:59.750725653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-hh7kp,Uid:645441bb-d8ad-4690-8ed3-c34b9e044d5e,Namespace:default,Attempt:0,}"
Feb 9 09:47:59.823693 systemd-networkd[1453]: lxcb4842a23eca8: Link UP
Feb 9 09:47:59.841862 kernel: eth0: renamed from tmp1d2e4
Feb 9 09:47:59.850603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb4842a23eca8: link becomes ready
Feb 9 09:47:59.856021 systemd-networkd[1453]: lxcb4842a23eca8: Gained carrier
Feb 9 09:48:00.471151 kubelet[2080]: E0209 09:48:00.471054 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:00.733002 systemd-networkd[1453]: lxc_health: Gained IPv6LL
Feb 9 09:48:00.750013 amazon-ssm-agent[1659]: 2024-02-09 09:48:00 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated
Feb 9 09:48:01.471610 kubelet[2080]: E0209 09:48:01.471513 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:01.693012 systemd-networkd[1453]: lxcb4842a23eca8: Gained IPv6LL
Feb 9 09:48:02.472051 kubelet[2080]: E0209 09:48:02.471979 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:03.472197 kubelet[2080]: E0209 09:48:03.472128 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:04.015427 update_engine[1641]: I0209 09:48:04.014847 1641 update_attempter.cc:509] Updating boot flags...
Feb 9 09:48:04.473061 kubelet[2080]: E0209 09:48:04.473001 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:05.473378 kubelet[2080]: E0209 09:48:05.473311 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:06.473997 kubelet[2080]: E0209 09:48:06.473931 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:07.474750 kubelet[2080]: E0209 09:48:07.474686 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:08.101619 env[1647]: time="2024-02-09T09:48:08.101465464Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:08.101619 env[1647]: time="2024-02-09T09:48:08.101552749Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:08.102443 env[1647]: time="2024-02-09T09:48:08.102333875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:08.102907 env[1647]: time="2024-02-09T09:48:08.102821393Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5 pid=3301 runtime=io.containerd.runc.v2
Feb 9 09:48:08.131394 systemd[1]: run-containerd-runc-k8s.io-1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5-runc.6TFfFw.mount: Deactivated successfully.
Feb 9 09:48:08.141274 systemd[1]: Started cri-containerd-1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5.scope.
Feb 9 09:48:08.206630 env[1647]: time="2024-02-09T09:48:08.206575160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-hh7kp,Uid:645441bb-d8ad-4690-8ed3-c34b9e044d5e,Namespace:default,Attempt:0,} returns sandbox id \"1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5\""
Feb 9 09:48:08.209665 env[1647]: time="2024-02-09T09:48:08.209615825Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:48:08.476020 kubelet[2080]: E0209 09:48:08.475278 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:09.476383 kubelet[2080]: E0209 09:48:09.476291 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:10.477442 kubelet[2080]: E0209 09:48:10.477375 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:11.477802 kubelet[2080]: E0209 09:48:11.477727 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:12.478520 kubelet[2080]: E0209 09:48:12.478452 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:12.523847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618416544.mount: Deactivated successfully.
Feb 9 09:48:13.447915 kubelet[2080]: E0209 09:48:13.447853 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:13.479393 kubelet[2080]: E0209 09:48:13.479330 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:14.019024 env[1647]: time="2024-02-09T09:48:14.018944392Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:14.022130 env[1647]: time="2024-02-09T09:48:14.022067457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:14.025338 env[1647]: time="2024-02-09T09:48:14.025278128Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:14.028377 env[1647]: time="2024-02-09T09:48:14.028316443Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:14.029840 env[1647]: time="2024-02-09T09:48:14.029792493Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:48:14.035057 env[1647]: time="2024-02-09T09:48:14.034982185Z" level=info msg="CreateContainer within sandbox \"1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 9 09:48:14.052539 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1237292033.mount: Deactivated successfully.
Feb 9 09:48:14.064123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount756617155.mount: Deactivated successfully.
Feb 9 09:48:14.071213 env[1647]: time="2024-02-09T09:48:14.071152128Z" level=info msg="CreateContainer within sandbox \"1d2e4b507dbe060bca480f9596b9b5e80c9c7869c25749c41d3235d39a3ffcd5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7152481327fa5071e6c509b20bdaef1c6c1f021bc10bc94b9a88d60b77352a1c\""
Feb 9 09:48:14.072350 env[1647]: time="2024-02-09T09:48:14.072224751Z" level=info msg="StartContainer for \"7152481327fa5071e6c509b20bdaef1c6c1f021bc10bc94b9a88d60b77352a1c\""
Feb 9 09:48:14.108326 systemd[1]: Started cri-containerd-7152481327fa5071e6c509b20bdaef1c6c1f021bc10bc94b9a88d60b77352a1c.scope.
Feb 9 09:48:14.174496 env[1647]: time="2024-02-09T09:48:14.174433443Z" level=info msg="StartContainer for \"7152481327fa5071e6c509b20bdaef1c6c1f021bc10bc94b9a88d60b77352a1c\" returns successfully"
Feb 9 09:48:14.480119 kubelet[2080]: E0209 09:48:14.480064 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:14.776362 kubelet[2080]: I0209 09:48:14.775990 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-hh7kp" podStartSLOduration=9.954531607 podCreationTimestamp="2024-02-09 09:47:59 +0000 UTC" firstStartedPulling="2024-02-09 09:48:08.208855068 +0000 UTC m=+35.971268385" lastFinishedPulling="2024-02-09 09:48:14.030264366 +0000 UTC m=+41.792677683" observedRunningTime="2024-02-09 09:48:14.775590303 +0000 UTC m=+42.538003620" watchObservedRunningTime="2024-02-09 09:48:14.775940905 +0000 UTC m=+42.538354222"
Feb 9 09:48:15.481076 kubelet[2080]: E0209 09:48:15.481009 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:16.482008 kubelet[2080]: E0209 09:48:16.481966 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:17.483375 kubelet[2080]: E0209 09:48:17.483312 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:18.482996 kubelet[2080]: I0209 09:48:18.482939 2080 topology_manager.go:215] "Topology Admit Handler" podUID="4fd3880f-5f62-4770-ad98-618cc1eafa36" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 9 09:48:18.483561 kubelet[2080]: E0209 09:48:18.483516 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:18.493389 systemd[1]: Created slice kubepods-besteffort-pod4fd3880f_5f62_4770_ad98_618cc1eafa36.slice.
Feb 9 09:48:18.589830 kubelet[2080]: I0209 09:48:18.589747 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qmfm\" (UniqueName: \"kubernetes.io/projected/4fd3880f-5f62-4770-ad98-618cc1eafa36-kube-api-access-8qmfm\") pod \"nfs-server-provisioner-0\" (UID: \"4fd3880f-5f62-4770-ad98-618cc1eafa36\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:48:18.590021 kubelet[2080]: I0209 09:48:18.589887 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/4fd3880f-5f62-4770-ad98-618cc1eafa36-data\") pod \"nfs-server-provisioner-0\" (UID: \"4fd3880f-5f62-4770-ad98-618cc1eafa36\") " pod="default/nfs-server-provisioner-0"
Feb 9 09:48:18.801028 env[1647]: time="2024-02-09T09:48:18.800554109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4fd3880f-5f62-4770-ad98-618cc1eafa36,Namespace:default,Attempt:0,}"
Feb 9 09:48:18.849174 systemd-networkd[1453]: lxc71129045a077: Link UP
Feb 9 09:48:18.852824 kernel: eth0: renamed from tmp76442
Feb 9 09:48:18.857155 (udev-worker)[3394]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:18.863134 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:48:18.863276 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc71129045a077: link becomes ready
Feb 9 09:48:18.863370 systemd-networkd[1453]: lxc71129045a077: Gained carrier
Feb 9 09:48:19.283141 env[1647]: time="2024-02-09T09:48:19.282966499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:19.283475 env[1647]: time="2024-02-09T09:48:19.283083109Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:19.283475 env[1647]: time="2024-02-09T09:48:19.283441591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:19.284270 env[1647]: time="2024-02-09T09:48:19.284145236Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7644296bf264fea22aeb1df164227e2f82645336323e92dc26c8c5f60cc6d656 pid=3425 runtime=io.containerd.runc.v2
Feb 9 09:48:19.313629 systemd[1]: Started cri-containerd-7644296bf264fea22aeb1df164227e2f82645336323e92dc26c8c5f60cc6d656.scope.
Feb 9 09:48:19.390653 env[1647]: time="2024-02-09T09:48:19.390575302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:4fd3880f-5f62-4770-ad98-618cc1eafa36,Namespace:default,Attempt:0,} returns sandbox id \"7644296bf264fea22aeb1df164227e2f82645336323e92dc26c8c5f60cc6d656\""
Feb 9 09:48:19.394245 env[1647]: time="2024-02-09T09:48:19.394195033Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 09:48:19.485038 kubelet[2080]: E0209 09:48:19.484978 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:20.486192 kubelet[2080]: E0209 09:48:20.486133 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:20.701220 systemd-networkd[1453]: lxc71129045a077: Gained IPv6LL
Feb 9 09:48:21.487308 kubelet[2080]: E0209 09:48:21.487242 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:22.488408 kubelet[2080]: E0209 09:48:22.488344 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:22.554492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1926273971.mount: Deactivated successfully.
Feb 9 09:48:23.488893 kubelet[2080]: E0209 09:48:23.488829 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:24.489337 kubelet[2080]: E0209 09:48:24.489260 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:25.490222 kubelet[2080]: E0209 09:48:25.490160 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:26.372146 env[1647]: time="2024-02-09T09:48:26.372047708Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:26.401249 env[1647]: time="2024-02-09T09:48:26.401179248Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:26.404547 env[1647]: time="2024-02-09T09:48:26.404484881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:26.407803 env[1647]: time="2024-02-09T09:48:26.407716000Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:26.409508 env[1647]: time="2024-02-09T09:48:26.409461844Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
Feb 9 09:48:26.414226 env[1647]: time="2024-02-09T09:48:26.414070646Z" level=info msg="CreateContainer within sandbox \"7644296bf264fea22aeb1df164227e2f82645336323e92dc26c8c5f60cc6d656\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 09:48:26.431577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3567202094.mount: Deactivated successfully.
Feb 9 09:48:26.443958 env[1647]: time="2024-02-09T09:48:26.443896548Z" level=info msg="CreateContainer within sandbox \"7644296bf264fea22aeb1df164227e2f82645336323e92dc26c8c5f60cc6d656\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"1c96f94f2fefc8cc384131549108d2b80e8e6d6f7cf3bb4ec3e7e69b6ab56373\""
Feb 9 09:48:26.445232 env[1647]: time="2024-02-09T09:48:26.445187095Z" level=info msg="StartContainer for \"1c96f94f2fefc8cc384131549108d2b80e8e6d6f7cf3bb4ec3e7e69b6ab56373\""
Feb 9 09:48:26.491806 kubelet[2080]: E0209 09:48:26.491101 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:26.493128 systemd[1]: Started cri-containerd-1c96f94f2fefc8cc384131549108d2b80e8e6d6f7cf3bb4ec3e7e69b6ab56373.scope.
Feb 9 09:48:26.554410 env[1647]: time="2024-02-09T09:48:26.554346425Z" level=info msg="StartContainer for \"1c96f94f2fefc8cc384131549108d2b80e8e6d6f7cf3bb4ec3e7e69b6ab56373\" returns successfully"
Feb 9 09:48:27.426438 systemd[1]: run-containerd-runc-k8s.io-1c96f94f2fefc8cc384131549108d2b80e8e6d6f7cf3bb4ec3e7e69b6ab56373-runc.l95124.mount: Deactivated successfully.
Feb 9 09:48:27.492099 kubelet[2080]: E0209 09:48:27.492060 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:28.493431 kubelet[2080]: E0209 09:48:28.493368 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:29.493590 kubelet[2080]: E0209 09:48:29.493521 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:30.494371 kubelet[2080]: E0209 09:48:30.494312 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:31.495180 kubelet[2080]: E0209 09:48:31.495122 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:32.495468 kubelet[2080]: E0209 09:48:32.495401 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:33.447794 kubelet[2080]: E0209 09:48:33.447732 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:33.495963 kubelet[2080]: E0209 09:48:33.495858 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:34.497065 kubelet[2080]: E0209 09:48:34.497015 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:35.498183 kubelet[2080]: E0209 09:48:35.498107 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:36.278042 kubelet[2080]: I0209 09:48:36.277977 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=11.261348702 podCreationTimestamp="2024-02-09 09:48:18 +0000 UTC" firstStartedPulling="2024-02-09 09:48:19.39336831 +0000 UTC m=+47.155781627" lastFinishedPulling="2024-02-09 09:48:26.409922383 +0000 UTC m=+54.172335700" observedRunningTime="2024-02-09 09:48:26.818424415 +0000 UTC m=+54.580837756" watchObservedRunningTime="2024-02-09 09:48:36.277902775 +0000 UTC m=+64.040316116"
Feb 9 09:48:36.278540 kubelet[2080]: I0209 09:48:36.278465 2080 topology_manager.go:215] "Topology Admit Handler" podUID="2e222d05-a5fc-4509-a444-b9f3adadc8e4" podNamespace="default" podName="test-pod-1"
Feb 9 09:48:36.288442 systemd[1]: Created slice kubepods-besteffort-pod2e222d05_a5fc_4509_a444_b9f3adadc8e4.slice.
Feb 9 09:48:36.398926 kubelet[2080]: I0209 09:48:36.398889 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2f8lm\" (UniqueName: \"kubernetes.io/projected/2e222d05-a5fc-4509-a444-b9f3adadc8e4-kube-api-access-2f8lm\") pod \"test-pod-1\" (UID: \"2e222d05-a5fc-4509-a444-b9f3adadc8e4\") " pod="default/test-pod-1"
Feb 9 09:48:36.399207 kubelet[2080]: I0209 09:48:36.399184 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-500a23dd-f4c4-4bbf-b02f-a2b915eb6bd1\" (UniqueName: \"kubernetes.io/nfs/2e222d05-a5fc-4509-a444-b9f3adadc8e4-pvc-500a23dd-f4c4-4bbf-b02f-a2b915eb6bd1\") pod \"test-pod-1\" (UID: \"2e222d05-a5fc-4509-a444-b9f3adadc8e4\") " pod="default/test-pod-1"
Feb 9 09:48:36.499092 kubelet[2080]: E0209 09:48:36.499047 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:36.535994 kernel: FS-Cache: Loaded
Feb 9 09:48:36.580912 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 09:48:36.581071 kernel: RPC: Registered udp transport module.
Feb 9 09:48:36.581113 kernel: RPC: Registered tcp transport module.
Feb 9 09:48:36.585007 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 09:48:36.639822 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 09:48:36.897377 kernel: NFS: Registering the id_resolver key type
Feb 9 09:48:36.897554 kernel: Key type id_resolver registered
Feb 9 09:48:36.899111 kernel: Key type id_legacy registered
Feb 9 09:48:36.938428 nfsidmap[3547]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 09:48:36.944164 nfsidmap[3548]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'us-west-2.compute.internal'
Feb 9 09:48:37.197460 env[1647]: time="2024-02-09T09:48:37.196914575Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e222d05-a5fc-4509-a444-b9f3adadc8e4,Namespace:default,Attempt:0,}"
Feb 9 09:48:37.248731 (udev-worker)[3538]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:37.249581 systemd-networkd[1453]: lxc863172e7a2dc: Link UP
Feb 9 09:48:37.251055 (udev-worker)[3544]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:48:37.264808 kernel: eth0: renamed from tmp9b1e4
Feb 9 09:48:37.277491 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 09:48:37.277594 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc863172e7a2dc: link becomes ready
Feb 9 09:48:37.277825 systemd-networkd[1453]: lxc863172e7a2dc: Gained carrier
Feb 9 09:48:37.500173 kubelet[2080]: E0209 09:48:37.500024 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:37.724421 env[1647]: time="2024-02-09T09:48:37.724292062Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:37.724421 env[1647]: time="2024-02-09T09:48:37.724367790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:37.724865 env[1647]: time="2024-02-09T09:48:37.724394633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:37.725331 env[1647]: time="2024-02-09T09:48:37.725249628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49 pid=3574 runtime=io.containerd.runc.v2
Feb 9 09:48:37.763610 systemd[1]: run-containerd-runc-k8s.io-9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49-runc.utxzcH.mount: Deactivated successfully.
Feb 9 09:48:37.769118 systemd[1]: Started cri-containerd-9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49.scope.
Feb 9 09:48:37.838120 env[1647]: time="2024-02-09T09:48:37.838054084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:2e222d05-a5fc-4509-a444-b9f3adadc8e4,Namespace:default,Attempt:0,} returns sandbox id \"9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49\""
Feb 9 09:48:37.841094 env[1647]: time="2024-02-09T09:48:37.841042206Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 09:48:38.279740 env[1647]: time="2024-02-09T09:48:38.279680220Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:38.282846 env[1647]: time="2024-02-09T09:48:38.282763651Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:38.285761 env[1647]: time="2024-02-09T09:48:38.285702848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:38.289233 env[1647]: time="2024-02-09T09:48:38.289173381Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:38.290857 env[1647]: time="2024-02-09T09:48:38.290807396Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:01bfff6bfbc6f0e8a890bad9e22c5392e6dbfd67def93467db6231d4be1b719b\""
Feb 9 09:48:38.295218 env[1647]: time="2024-02-09T09:48:38.295162158Z" level=info msg="CreateContainer within sandbox \"9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 09:48:38.322538 env[1647]: time="2024-02-09T09:48:38.322478882Z" level=info msg="CreateContainer within sandbox \"9b1e437a1b0527c56be2b3e8a2dcd2264e02347dce8a7ab2bafba3bb51d78b49\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e9a2da0736ac33b36c0ff11110bef705d59cbe4faad5693a9a8934a5a4308360\""
Feb 9 09:48:38.323906 env[1647]: time="2024-02-09T09:48:38.323859708Z" level=info msg="StartContainer for \"e9a2da0736ac33b36c0ff11110bef705d59cbe4faad5693a9a8934a5a4308360\""
Feb 9 09:48:38.362871 systemd[1]: Started cri-containerd-e9a2da0736ac33b36c0ff11110bef705d59cbe4faad5693a9a8934a5a4308360.scope.
Feb 9 09:48:38.429156 env[1647]: time="2024-02-09T09:48:38.429075017Z" level=info msg="StartContainer for \"e9a2da0736ac33b36c0ff11110bef705d59cbe4faad5693a9a8934a5a4308360\" returns successfully"
Feb 9 09:48:38.500481 kubelet[2080]: E0209 09:48:38.500397 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:38.519755 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357390951.mount: Deactivated successfully.
Feb 9 09:48:38.685064 systemd-networkd[1453]: lxc863172e7a2dc: Gained IPv6LL
Feb 9 09:48:38.842881 kubelet[2080]: I0209 09:48:38.842829 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.391605465 podCreationTimestamp="2024-02-09 09:48:19 +0000 UTC" firstStartedPulling="2024-02-09 09:48:37.840072748 +0000 UTC m=+65.602486077" lastFinishedPulling="2024-02-09 09:48:38.291211838 +0000 UTC m=+66.053625143" observedRunningTime="2024-02-09 09:48:38.841729772 +0000 UTC m=+66.604143101" watchObservedRunningTime="2024-02-09 09:48:38.842744531 +0000 UTC m=+66.605157860"
Feb 9 09:48:39.500915 kubelet[2080]: E0209 09:48:39.500858 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:40.501939 kubelet[2080]: E0209 09:48:40.501868 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:41.502858 kubelet[2080]: E0209 09:48:41.502762 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:42.502986 kubelet[2080]: E0209 09:48:42.502945 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:43.503982 kubelet[2080]: E0209 09:48:43.503919 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:44.505121 kubelet[2080]: E0209 09:48:44.505078 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:45.506955 kubelet[2080]: E0209 09:48:45.506831 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:46.055636 systemd[1]: run-containerd-runc-k8s.io-85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367-runc.kg2lvg.mount: Deactivated successfully.
Feb 9 09:48:46.089058 env[1647]: time="2024-02-09T09:48:46.088978376Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 09:48:46.100057 env[1647]: time="2024-02-09T09:48:46.099987567Z" level=info msg="StopContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" with timeout 2 (s)"
Feb 9 09:48:46.100659 env[1647]: time="2024-02-09T09:48:46.100616986Z" level=info msg="Stop container \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" with signal terminated"
Feb 9 09:48:46.112675 systemd-networkd[1453]: lxc_health: Link DOWN
Feb 9 09:48:46.112697 systemd-networkd[1453]: lxc_health: Lost carrier
Feb 9 09:48:46.144696 systemd[1]: cri-containerd-85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367.scope: Deactivated successfully.
Feb 9 09:48:46.145301 systemd[1]: cri-containerd-85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367.scope: Consumed 14.273s CPU time.
Feb 9 09:48:46.182784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367-rootfs.mount: Deactivated successfully.
Feb 9 09:48:46.450448 env[1647]: time="2024-02-09T09:48:46.450371703Z" level=info msg="shim disconnected" id=85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367
Feb 9 09:48:46.450448 env[1647]: time="2024-02-09T09:48:46.450441073Z" level=warning msg="cleaning up after shim disconnected" id=85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367 namespace=k8s.io
Feb 9 09:48:46.450816 env[1647]: time="2024-02-09T09:48:46.450463188Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:46.465233 env[1647]: time="2024-02-09T09:48:46.465169817Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3709 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:46.469065 env[1647]: time="2024-02-09T09:48:46.468990924Z" level=info msg="StopContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" returns successfully"
Feb 9 09:48:46.470034 env[1647]: time="2024-02-09T09:48:46.469966654Z" level=info msg="StopPodSandbox for \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\""
Feb 9 09:48:46.470229 env[1647]: time="2024-02-09T09:48:46.470072827Z" level=info msg="Container to stop \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:46.470229 env[1647]: time="2024-02-09T09:48:46.470105514Z" level=info msg="Container to stop \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:46.470229 env[1647]: time="2024-02-09T09:48:46.470132837Z" level=info msg="Container to stop \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:46.470229 env[1647]: time="2024-02-09T09:48:46.470160461Z" level=info msg="Container to stop \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:46.470229 env[1647]: time="2024-02-09T09:48:46.470188396Z" level=info msg="Container to stop \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 09:48:46.472997 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010-shm.mount: Deactivated successfully.
Feb 9 09:48:46.485245 systemd[1]: cri-containerd-b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010.scope: Deactivated successfully.
Feb 9 09:48:46.507686 kubelet[2080]: E0209 09:48:46.507635 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:46.531505 env[1647]: time="2024-02-09T09:48:46.531437804Z" level=info msg="shim disconnected" id=b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010
Feb 9 09:48:46.531827 env[1647]: time="2024-02-09T09:48:46.531792599Z" level=warning msg="cleaning up after shim disconnected" id=b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010 namespace=k8s.io
Feb 9 09:48:46.531951 env[1647]: time="2024-02-09T09:48:46.531923791Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:46.546577 env[1647]: time="2024-02-09T09:48:46.546502204Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3742 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:46.547172 env[1647]: time="2024-02-09T09:48:46.547124195Z" level=info msg="TearDown network for sandbox \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" successfully"
Feb 9 09:48:46.547288 env[1647]: time="2024-02-09T09:48:46.547171366Z" level=info msg="StopPodSandbox for \"b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010\" returns successfully"
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660273 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-cgroup\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") "
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660452 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-bpf-maps\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") "
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660371 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660554 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-lib-modules\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") "
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660720 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-clustermesh-secrets\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") "
Feb 9 09:48:46.660833 kubelet[2080]: I0209 09:48:46.660602 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 09:48:46.661371 kubelet[2080]: I0209 09:48:46.660632 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "lib-modules".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.661592 kubelet[2080]: I0209 09:48:46.660897 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-config-path\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.661677 kubelet[2080]: I0209 09:48:46.661645 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-run\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.661739 kubelet[2080]: I0209 09:48:46.661694 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hostproc\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.661838 kubelet[2080]: I0209 09:48:46.661793 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hubble-tls\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.661910 kubelet[2080]: I0209 09:48:46.661840 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-xtables-lock\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.661910 kubelet[2080]: I0209 09:48:46.661907 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-net\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.662031 kubelet[2080]: I0209 09:48:46.661972 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cni-path\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.662031 kubelet[2080]: I0209 09:48:46.662016 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-kernel\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.662157 kubelet[2080]: I0209 09:48:46.662087 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-etc-cni-netd\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.662223 kubelet[2080]: I0209 09:48:46.662159 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbbhq\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-kube-api-access-lbbhq\") pod \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\" (UID: \"2014d963-9ed4-4a31-8855-de9cfcc2a7c5\") " Feb 9 09:48:46.662289 kubelet[2080]: I0209 09:48:46.662241 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-cgroup\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.662289 kubelet[2080]: I0209 09:48:46.662273 2080 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" 
(UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-bpf-maps\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.662403 kubelet[2080]: I0209 09:48:46.662321 2080 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-lib-modules\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.666034 kubelet[2080]: I0209 09:48:46.665958 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666197 kubelet[2080]: I0209 09:48:46.666056 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hostproc" (OuterVolumeSpecName: "hostproc") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666541 kubelet[2080]: I0209 09:48:46.666491 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666641 kubelet[2080]: I0209 09:48:46.666559 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666641 kubelet[2080]: I0209 09:48:46.666606 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cni-path" (OuterVolumeSpecName: "cni-path") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666801 kubelet[2080]: I0209 09:48:46.666646 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.666801 kubelet[2080]: I0209 09:48:46.666687 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:46.668875 kubelet[2080]: I0209 09:48:46.668825 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 09:48:46.670955 kubelet[2080]: I0209 09:48:46.670883 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:46.674098 kubelet[2080]: I0209 09:48:46.674024 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:46.675649 kubelet[2080]: I0209 09:48:46.675576 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-kube-api-access-lbbhq" (OuterVolumeSpecName: "kube-api-access-lbbhq") pod "2014d963-9ed4-4a31-8855-de9cfcc2a7c5" (UID: "2014d963-9ed4-4a31-8855-de9cfcc2a7c5"). InnerVolumeSpecName "kube-api-access-lbbhq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:46.763075 kubelet[2080]: I0209 09:48:46.762960 2080 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-etc-cni-netd\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.763075 kubelet[2080]: I0209 09:48:46.763009 2080 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lbbhq\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-kube-api-access-lbbhq\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.763075 kubelet[2080]: I0209 09:48:46.763037 2080 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-clustermesh-secrets\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.763853 kubelet[2080]: I0209 09:48:46.763829 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-config-path\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764016 kubelet[2080]: I0209 09:48:46.763996 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cilium-run\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764156 kubelet[2080]: I0209 09:48:46.764137 2080 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hostproc\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764296 kubelet[2080]: I0209 09:48:46.764278 2080 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-hubble-tls\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764467 
kubelet[2080]: I0209 09:48:46.764448 2080 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-xtables-lock\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764607 kubelet[2080]: I0209 09:48:46.764588 2080 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-net\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764752 kubelet[2080]: I0209 09:48:46.764733 2080 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-cni-path\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.764906 kubelet[2080]: I0209 09:48:46.764885 2080 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2014d963-9ed4-4a31-8855-de9cfcc2a7c5-host-proc-sys-kernel\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:46.852216 kubelet[2080]: I0209 09:48:46.852179 2080 scope.go:117] "RemoveContainer" containerID="85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367" Feb 9 09:48:46.855857 env[1647]: time="2024-02-09T09:48:46.855759352Z" level=info msg="RemoveContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\"" Feb 9 09:48:46.863848 env[1647]: time="2024-02-09T09:48:46.863732156Z" level=info msg="RemoveContainer for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" returns successfully" Feb 9 09:48:46.868109 systemd[1]: Removed slice kubepods-burstable-pod2014d963_9ed4_4a31_8855_de9cfcc2a7c5.slice. Feb 9 09:48:46.868325 systemd[1]: kubepods-burstable-pod2014d963_9ed4_4a31_8855_de9cfcc2a7c5.slice: Consumed 14.483s CPU time. 
Feb 9 09:48:46.875358 kubelet[2080]: I0209 09:48:46.875326 2080 scope.go:117] "RemoveContainer" containerID="9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3" Feb 9 09:48:46.878811 env[1647]: time="2024-02-09T09:48:46.878416898Z" level=info msg="RemoveContainer for \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\"" Feb 9 09:48:46.883639 env[1647]: time="2024-02-09T09:48:46.883582281Z" level=info msg="RemoveContainer for \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\" returns successfully" Feb 9 09:48:46.884269 kubelet[2080]: I0209 09:48:46.884233 2080 scope.go:117] "RemoveContainer" containerID="902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04" Feb 9 09:48:46.887058 env[1647]: time="2024-02-09T09:48:46.887003126Z" level=info msg="RemoveContainer for \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\"" Feb 9 09:48:46.891617 env[1647]: time="2024-02-09T09:48:46.891539449Z" level=info msg="RemoveContainer for \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\" returns successfully" Feb 9 09:48:46.892053 kubelet[2080]: I0209 09:48:46.892025 2080 scope.go:117] "RemoveContainer" containerID="a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0" Feb 9 09:48:46.894056 env[1647]: time="2024-02-09T09:48:46.893995484Z" level=info msg="RemoveContainer for \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\"" Feb 9 09:48:46.898255 env[1647]: time="2024-02-09T09:48:46.898183121Z" level=info msg="RemoveContainer for \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\" returns successfully" Feb 9 09:48:46.898585 kubelet[2080]: I0209 09:48:46.898539 2080 scope.go:117] "RemoveContainer" containerID="83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848" Feb 9 09:48:46.900309 env[1647]: time="2024-02-09T09:48:46.900252754Z" level=info msg="RemoveContainer for 
\"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\"" Feb 9 09:48:46.904606 env[1647]: time="2024-02-09T09:48:46.904531564Z" level=info msg="RemoveContainer for \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\" returns successfully" Feb 9 09:48:46.904920 kubelet[2080]: I0209 09:48:46.904871 2080 scope.go:117] "RemoveContainer" containerID="85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367" Feb 9 09:48:46.905388 env[1647]: time="2024-02-09T09:48:46.905256825Z" level=error msg="ContainerStatus for \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\": not found" Feb 9 09:48:46.905632 kubelet[2080]: E0209 09:48:46.905600 2080 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\": not found" containerID="85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367" Feb 9 09:48:46.905794 kubelet[2080]: I0209 09:48:46.905751 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367"} err="failed to get container status \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\": rpc error: code = NotFound desc = an error occurred when try to find container \"85e629b5adc076103fad787eee5ade0ca43672a13fca1210ee3dc78753cec367\": not found" Feb 9 09:48:46.905896 kubelet[2080]: I0209 09:48:46.905813 2080 scope.go:117] "RemoveContainer" containerID="9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3" Feb 9 09:48:46.906333 env[1647]: time="2024-02-09T09:48:46.906178676Z" level=error msg="ContainerStatus for 
\"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\": not found" Feb 9 09:48:46.906464 kubelet[2080]: E0209 09:48:46.906427 2080 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\": not found" containerID="9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3" Feb 9 09:48:46.906543 kubelet[2080]: I0209 09:48:46.906484 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3"} err="failed to get container status \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\": rpc error: code = NotFound desc = an error occurred when try to find container \"9ae1ff1f94a630c12845b6f633ed16033112c994ec17fa8ca8b5d979b55a30f3\": not found" Feb 9 09:48:46.906543 kubelet[2080]: I0209 09:48:46.906507 2080 scope.go:117] "RemoveContainer" containerID="902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04" Feb 9 09:48:46.907087 env[1647]: time="2024-02-09T09:48:46.906984455Z" level=error msg="ContainerStatus for \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\": not found" Feb 9 09:48:46.907325 kubelet[2080]: E0209 09:48:46.907292 2080 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\": not found" 
containerID="902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04" Feb 9 09:48:46.907431 kubelet[2080]: I0209 09:48:46.907362 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04"} err="failed to get container status \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\": rpc error: code = NotFound desc = an error occurred when try to find container \"902c7d4090ec0d74e3f9c8a000d2aac6beba78a963c922806dc96f0c6187ca04\": not found" Feb 9 09:48:46.907431 kubelet[2080]: I0209 09:48:46.907388 2080 scope.go:117] "RemoveContainer" containerID="a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0" Feb 9 09:48:46.907942 env[1647]: time="2024-02-09T09:48:46.907793209Z" level=error msg="ContainerStatus for \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\": not found" Feb 9 09:48:46.908053 kubelet[2080]: E0209 09:48:46.908032 2080 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\": not found" containerID="a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0" Feb 9 09:48:46.908136 kubelet[2080]: I0209 09:48:46.908077 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0"} err="failed to get container status \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\": rpc error: code = NotFound desc = an error occurred when try to find container \"a079e0c7aa5afb82a3b3395052b0a323404c1b5a380d36eab75dc581d650c3a0\": not found" Feb 9 
09:48:46.908136 kubelet[2080]: I0209 09:48:46.908099 2080 scope.go:117] "RemoveContainer" containerID="83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848" Feb 9 09:48:46.908452 env[1647]: time="2024-02-09T09:48:46.908372878Z" level=error msg="ContainerStatus for \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\": not found" Feb 9 09:48:46.908674 kubelet[2080]: E0209 09:48:46.908640 2080 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\": not found" containerID="83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848" Feb 9 09:48:46.908787 kubelet[2080]: I0209 09:48:46.908695 2080 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848"} err="failed to get container status \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\": rpc error: code = NotFound desc = an error occurred when try to find container \"83e5775a0e4aaba65696360bed7e1957f8f418a766fe542f4455ef5a2cef8848\": not found" Feb 9 09:48:47.047913 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b819c8d61a9d158b67e6750247683c0898971857fde7f2f54cd508d0a2795010-rootfs.mount: Deactivated successfully. Feb 9 09:48:47.048083 systemd[1]: var-lib-kubelet-pods-2014d963\x2d9ed4\x2d4a31\x2d8855\x2dde9cfcc2a7c5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlbbhq.mount: Deactivated successfully. Feb 9 09:48:47.048229 systemd[1]: var-lib-kubelet-pods-2014d963\x2d9ed4\x2d4a31\x2d8855\x2dde9cfcc2a7c5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 09:48:47.048361 systemd[1]: var-lib-kubelet-pods-2014d963\x2d9ed4\x2d4a31\x2d8855\x2dde9cfcc2a7c5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:47.508339 kubelet[2080]: E0209 09:48:47.508277 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:47.634127 kubelet[2080]: I0209 09:48:47.634070 2080 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" path="/var/lib/kubelet/pods/2014d963-9ed4-4a31-8855-de9cfcc2a7c5/volumes" Feb 9 09:48:48.508876 kubelet[2080]: E0209 09:48:48.508817 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:48.599318 kubelet[2080]: E0209 09:48:48.599271 2080 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 09:48:49.509899 kubelet[2080]: E0209 09:48:49.509832 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:50.123636 kubelet[2080]: I0209 09:48:50.123579 2080 topology_manager.go:215] "Topology Admit Handler" podUID="459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-9tgt6" Feb 9 09:48:50.123822 kubelet[2080]: E0209 09:48:50.123654 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="mount-cgroup" Feb 9 09:48:50.123822 kubelet[2080]: E0209 09:48:50.123677 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="apply-sysctl-overwrites" Feb 9 09:48:50.123822 kubelet[2080]: E0209 09:48:50.123695 2080 cpu_manager.go:395] "RemoveStaleState: removing container" 
podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="mount-bpf-fs" Feb 9 09:48:50.123822 kubelet[2080]: E0209 09:48:50.123712 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="clean-cilium-state" Feb 9 09:48:50.123822 kubelet[2080]: E0209 09:48:50.123730 2080 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="cilium-agent" Feb 9 09:48:50.123822 kubelet[2080]: I0209 09:48:50.123788 2080 memory_manager.go:346] "RemoveStaleState removing state" podUID="2014d963-9ed4-4a31-8855-de9cfcc2a7c5" containerName="cilium-agent" Feb 9 09:48:50.133001 systemd[1]: Created slice kubepods-besteffort-pod459fb3ec_777b_4c9b_a9b3_0c9b49bd7e9a.slice. Feb 9 09:48:50.148686 kubelet[2080]: W0209 09:48:50.148651 2080 reflector.go:535] object-"kube-system"/"cilium-config": failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.16.31" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.148960 kubelet[2080]: E0209 09:48:50.148936 2080 reflector.go:147] object-"kube-system"/"cilium-config": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "cilium-config" is forbidden: User "system:node:172.31.16.31" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.151870 kubelet[2080]: I0209 09:48:50.151833 2080 topology_manager.go:215] "Topology Admit Handler" podUID="3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" podNamespace="kube-system" podName="cilium-ztsfm" Feb 9 09:48:50.162739 systemd[1]: Created slice kubepods-burstable-pod3e913d0f_8e0d_44c4_b5ff_7f70a7f2b191.slice. 
Feb 9 09:48:50.182646 kubelet[2080]: W0209 09:48:50.182611 2080 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.182912 kubelet[2080]: E0209 09:48:50.182888 2080 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.183036 kubelet[2080]: W0209 09:48:50.182656 2080 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.183183 kubelet[2080]: E0209 09:48:50.183162 2080 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.183314 kubelet[2080]: W0209 09:48:50.182815 2080 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.183428 kubelet[2080]: E0209 09:48:50.183408 2080 reflector.go:147] 
object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:172.31.16.31" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '172.31.16.31' and this object Feb 9 09:48:50.187670 kubelet[2080]: I0209 09:48:50.187640 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-9tgt6\" (UID: \"459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a\") " pod="kube-system/cilium-operator-6bc8ccdb58-9tgt6" Feb 9 09:48:50.187951 kubelet[2080]: I0209 09:48:50.187905 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwjjz\" (UniqueName: \"kubernetes.io/projected/459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a-kube-api-access-mwjjz\") pod \"cilium-operator-6bc8ccdb58-9tgt6\" (UID: \"459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a\") " pod="kube-system/cilium-operator-6bc8ccdb58-9tgt6" Feb 9 09:48:50.288345 kubelet[2080]: I0209 09:48:50.288309 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-bpf-maps\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.288647 kubelet[2080]: I0209 09:48:50.288573 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-lib-modules\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.288863 kubelet[2080]: I0209 09:48:50.288843 2080 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-ipsec-secrets\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.289129 kubelet[2080]: I0209 09:48:50.289109 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-net\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.289906 kubelet[2080]: I0209 09:48:50.289825 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-etc-cni-netd\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.290204 kubelet[2080]: I0209 09:48:50.290184 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-clustermesh-secrets\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.290456 kubelet[2080]: I0209 09:48:50.290435 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-run\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.290639 kubelet[2080]: I0209 09:48:50.290607 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hostproc\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.290921 kubelet[2080]: I0209 09:48:50.290848 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-cgroup\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.291105 kubelet[2080]: I0209 09:48:50.291073 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cni-path\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.291339 kubelet[2080]: I0209 09:48:50.291319 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjtbz\" (UniqueName: \"kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-kube-api-access-sjtbz\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.291581 kubelet[2080]: I0209 09:48:50.291548 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.291866 kubelet[2080]: I0209 09:48:50.291834 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-kernel\") pod \"cilium-ztsfm\" (UID: 
\"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.292091 kubelet[2080]: I0209 09:48:50.292018 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.292458 kubelet[2080]: I0209 09:48:50.292272 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-xtables-lock\") pod \"cilium-ztsfm\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " pod="kube-system/cilium-ztsfm" Feb 9 09:48:50.510727 kubelet[2080]: E0209 09:48:50.510616 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:50.729748 kubelet[2080]: E0209 09:48:50.729685 2080 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-config-path cilium-ipsec-secrets clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-ztsfm" podUID="3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" Feb 9 09:48:51.003146 kubelet[2080]: I0209 09:48:51.003107 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-kernel\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.003389 kubelet[2080]: I0209 09:48:51.003366 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-bpf-maps\") pod 
\"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.003543 kubelet[2080]: I0209 09:48:51.003522 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-etc-cni-netd\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.003730 kubelet[2080]: I0209 09:48:51.003708 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjtbz\" (UniqueName: \"kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-kube-api-access-sjtbz\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.003906 kubelet[2080]: I0209 09:48:51.003885 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-xtables-lock\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004048 kubelet[2080]: I0209 09:48:51.004027 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cni-path\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004193 kubelet[2080]: I0209 09:48:51.004172 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-net\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004338 kubelet[2080]: I0209 09:48:51.004314 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" 
(UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-run\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004486 kubelet[2080]: I0209 09:48:51.004465 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hostproc\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004625 kubelet[2080]: I0209 09:48:51.004604 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-cgroup\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.004793 kubelet[2080]: I0209 09:48:51.004749 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-lib-modules\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.005067 kubelet[2080]: I0209 09:48:51.005019 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005211 kubelet[2080]: I0209 09:48:51.003229 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). 
InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005330 kubelet[2080]: I0209 09:48:51.003443 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005443 kubelet[2080]: I0209 09:48:51.003580 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005590 kubelet[2080]: I0209 09:48:51.005560 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005741 kubelet[2080]: I0209 09:48:51.005713 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cni-path" (OuterVolumeSpecName: "cni-path") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.005943 kubelet[2080]: I0209 09:48:51.005913 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.006142 kubelet[2080]: I0209 09:48:51.006111 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.006293 kubelet[2080]: I0209 09:48:51.006267 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hostproc" (OuterVolumeSpecName: "hostproc") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.006461 kubelet[2080]: I0209 09:48:51.006434 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 09:48:51.010399 systemd[1]: var-lib-kubelet-pods-3e913d0f\x2d8e0d\x2d44c4\x2db5ff\x2d7f70a7f2b191-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjtbz.mount: Deactivated successfully. 
Feb 9 09:48:51.011925 kubelet[2080]: I0209 09:48:51.011859 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-kube-api-access-sjtbz" (OuterVolumeSpecName: "kube-api-access-sjtbz") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "kube-api-access-sjtbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 09:48:51.106159 kubelet[2080]: I0209 09:48:51.106110 2080 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-etc-cni-netd\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106270 kubelet[2080]: I0209 09:48:51.106164 2080 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-sjtbz\" (UniqueName: \"kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-kube-api-access-sjtbz\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106270 kubelet[2080]: I0209 09:48:51.106191 2080 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-xtables-lock\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106270 kubelet[2080]: I0209 09:48:51.106218 2080 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cni-path\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106270 kubelet[2080]: I0209 09:48:51.106242 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-cgroup\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106270 kubelet[2080]: I0209 09:48:51.106265 2080 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-lib-modules\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106587 kubelet[2080]: I0209 09:48:51.106289 2080 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-net\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106587 kubelet[2080]: I0209 09:48:51.106313 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-run\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106587 kubelet[2080]: I0209 09:48:51.106337 2080 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hostproc\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106587 kubelet[2080]: I0209 09:48:51.106359 2080 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-bpf-maps\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.106587 kubelet[2080]: I0209 09:48:51.106385 2080 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-host-proc-sys-kernel\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.293192 kubelet[2080]: E0209 09:48:51.293081 2080 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:48:51.293885 kubelet[2080]: E0209 09:48:51.293843 2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a-cilium-config-path podName:459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a nodeName:}" failed. 
No retries permitted until 2024-02-09 09:48:51.793156008 +0000 UTC m=+79.555569313 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a-cilium-config-path") pod "cilium-operator-6bc8ccdb58-9tgt6" (UID: "459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:48:51.400595 kubelet[2080]: E0209 09:48:51.400564 2080 configmap.go:199] Couldn't get configMap kube-system/cilium-config: failed to sync configmap cache: timed out waiting for the condition Feb 9 09:48:51.400883 kubelet[2080]: E0209 09:48:51.400843 2080 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition Feb 9 09:48:51.400974 kubelet[2080]: E0209 09:48:51.400887 2080 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ztsfm: failed to sync secret cache: timed out waiting for the condition Feb 9 09:48:51.400974 kubelet[2080]: E0209 09:48:51.400851 2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path podName:3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191 nodeName:}" failed. No retries permitted until 2024-02-09 09:48:51.900825736 +0000 UTC m=+79.663239053 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path") pod "cilium-ztsfm" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191") : failed to sync configmap cache: timed out waiting for the condition Feb 9 09:48:51.401152 kubelet[2080]: E0209 09:48:51.400976 2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls podName:3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191 nodeName:}" failed. 
No retries permitted until 2024-02-09 09:48:51.900957254 +0000 UTC m=+79.663370583 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls") pod "cilium-ztsfm" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191") : failed to sync secret cache: timed out waiting for the condition Feb 9 09:48:51.408655 kubelet[2080]: I0209 09:48:51.408621 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-clustermesh-secrets\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.415423 systemd[1]: var-lib-kubelet-pods-3e913d0f\x2d8e0d\x2d44c4\x2db5ff\x2d7f70a7f2b191-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:51.417782 kubelet[2080]: I0209 09:48:51.417697 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:51.509200 kubelet[2080]: I0209 09:48:51.509166 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-ipsec-secrets\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:51.509891 kubelet[2080]: I0209 09:48:51.509846 2080 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-clustermesh-secrets\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.512139 kubelet[2080]: E0209 09:48:51.512093 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 09:48:51.517368 systemd[1]: var-lib-kubelet-pods-3e913d0f\x2d8e0d\x2d44c4\x2db5ff\x2d7f70a7f2b191-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 09:48:51.519147 kubelet[2080]: I0209 09:48:51.519078 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 09:48:51.610973 kubelet[2080]: I0209 09:48:51.610920 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-ipsec-secrets\") on node \"172.31.16.31\" DevicePath \"\"" Feb 9 09:48:51.640868 systemd[1]: Removed slice kubepods-burstable-pod3e913d0f_8e0d_44c4_b5ff_7f70a7f2b191.slice. 
Feb 9 09:48:51.917474 kubelet[2080]: E0209 09:48:51.913411 2080 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: object "kube-system"/"hubble-server-certs" not registered Feb 9 09:48:51.917474 kubelet[2080]: E0209 09:48:51.913475 2080 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-ztsfm: object "kube-system"/"hubble-server-certs" not registered Feb 9 09:48:51.917474 kubelet[2080]: E0209 09:48:51.913896 2080 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls podName:3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191 nodeName:}" failed. No retries permitted until 2024-02-09 09:48:52.913854754 +0000 UTC m=+80.676268083 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls") pod "cilium-ztsfm" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191") : object "kube-system"/"hubble-server-certs" not registered Feb 9 09:48:51.917474 kubelet[2080]: I0209 09:48:51.914966 2080 topology_manager.go:215] "Topology Admit Handler" podUID="f11f4a34-a193-4cea-9eea-968f55c2a82c" podNamespace="kube-system" podName="cilium-grdtf" Feb 9 09:48:51.925602 systemd[1]: Created slice kubepods-burstable-podf11f4a34_a193_4cea_9eea_968f55c2a82c.slice. Feb 9 09:48:51.939442 env[1647]: time="2024-02-09T09:48:51.938761451Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9tgt6,Uid:459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a,Namespace:kube-system,Attempt:0,}" Feb 9 09:48:51.981417 env[1647]: time="2024-02-09T09:48:51.981289689Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 09:48:51.981614 env[1647]: time="2024-02-09T09:48:51.981438642Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 09:48:51.981614 env[1647]: time="2024-02-09T09:48:51.981501676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 09:48:51.981957 env[1647]: time="2024-02-09T09:48:51.981874773Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595 pid=3773 runtime=io.containerd.runc.v2 Feb 9 09:48:52.015818 kubelet[2080]: I0209 09:48:52.015546 2080 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path\") pod \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\" (UID: \"3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191\") " Feb 9 09:48:52.015818 kubelet[2080]: I0209 09:48:52.015662 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c6mrq\" (UniqueName: \"kubernetes.io/projected/f11f4a34-a193-4cea-9eea-968f55c2a82c-kube-api-access-c6mrq\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.015818 kubelet[2080]: I0209 09:48:52.015712 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-cilium-run\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.015755 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f11f4a34-a193-4cea-9eea-968f55c2a82c-cilium-config-path\") pod \"cilium-grdtf\" (UID: 
\"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.016119 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f11f4a34-a193-4cea-9eea-968f55c2a82c-hubble-tls\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.016172 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-hostproc\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.016223 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-etc-cni-netd\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.016268 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-cilium-cgroup\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.017836 kubelet[2080]: I0209 09:48:52.016309 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-cni-path\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf" Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016351 2080 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f11f4a34-a193-4cea-9eea-968f55c2a82c-cilium-ipsec-secrets\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016403 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-host-proc-sys-kernel\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016445 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-bpf-maps\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016488 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-lib-modules\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016530 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-xtables-lock\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018263 kubelet[2080]: I0209 09:48:52.016571 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f11f4a34-a193-4cea-9eea-968f55c2a82c-clustermesh-secrets\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018585 kubelet[2080]: I0209 09:48:52.016615 2080 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f11f4a34-a193-4cea-9eea-968f55c2a82c-host-proc-sys-net\") pod \"cilium-grdtf\" (UID: \"f11f4a34-a193-4cea-9eea-968f55c2a82c\") " pod="kube-system/cilium-grdtf"
Feb 9 09:48:52.018585 kubelet[2080]: I0209 09:48:52.016651 2080 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-hubble-tls\") on node \"172.31.16.31\" DevicePath \"\""
Feb 9 09:48:52.018585 kubelet[2080]: I0209 09:48:52.018306 2080 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" (UID: "3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 09:48:52.019259 systemd[1]: Started cri-containerd-0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595.scope.
Feb 9 09:48:52.090833 env[1647]: time="2024-02-09T09:48:52.090712067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-9tgt6,Uid:459fb3ec-777b-4c9b-a9b3-0c9b49bd7e9a,Namespace:kube-system,Attempt:0,} returns sandbox id \"0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595\""
Feb 9 09:48:52.094551 env[1647]: time="2024-02-09T09:48:52.094500710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 09:48:52.119320 kubelet[2080]: I0209 09:48:52.118087 2080 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191-cilium-config-path\") on node \"172.31.16.31\" DevicePath \"\""
Feb 9 09:48:52.240288 env[1647]: time="2024-02-09T09:48:52.239520086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grdtf,Uid:f11f4a34-a193-4cea-9eea-968f55c2a82c,Namespace:kube-system,Attempt:0,}"
Feb 9 09:48:52.261122 env[1647]: time="2024-02-09T09:48:52.260981267Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 09:48:52.261353 env[1647]: time="2024-02-09T09:48:52.261076473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 09:48:52.261353 env[1647]: time="2024-02-09T09:48:52.261104612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 09:48:52.262405 env[1647]: time="2024-02-09T09:48:52.262308323Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee pid=3820 runtime=io.containerd.runc.v2
Feb 9 09:48:52.283908 systemd[1]: Started cri-containerd-4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee.scope.
Feb 9 09:48:52.343832 env[1647]: time="2024-02-09T09:48:52.343746593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-grdtf,Uid:f11f4a34-a193-4cea-9eea-968f55c2a82c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\""
Feb 9 09:48:52.348701 env[1647]: time="2024-02-09T09:48:52.348641197Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 09:48:52.370074 env[1647]: time="2024-02-09T09:48:52.370008048Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430\""
Feb 9 09:48:52.371127 env[1647]: time="2024-02-09T09:48:52.371082640Z" level=info msg="StartContainer for \"385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430\""
Feb 9 09:48:52.399021 systemd[1]: Started cri-containerd-385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430.scope.
Feb 9 09:48:52.431630 systemd[1]: run-containerd-runc-k8s.io-0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595-runc.hchU5l.mount: Deactivated successfully.
Feb 9 09:48:52.485107 env[1647]: time="2024-02-09T09:48:52.485043036Z" level=info msg="StartContainer for \"385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430\" returns successfully"
Feb 9 09:48:52.499617 systemd[1]: cri-containerd-385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430.scope: Deactivated successfully.
Feb 9 09:48:52.513109 kubelet[2080]: E0209 09:48:52.513023 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:52.534894 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430-rootfs.mount: Deactivated successfully.
Feb 9 09:48:52.551018 env[1647]: time="2024-02-09T09:48:52.550956215Z" level=info msg="shim disconnected" id=385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430
Feb 9 09:48:52.551376 env[1647]: time="2024-02-09T09:48:52.551330680Z" level=warning msg="cleaning up after shim disconnected" id=385ebffb01716a6051f30ef25425266d0d81edc80c5d855698f62edccbfc2430 namespace=k8s.io
Feb 9 09:48:52.551500 env[1647]: time="2024-02-09T09:48:52.551472397Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:52.566653 env[1647]: time="2024-02-09T09:48:52.566597249Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3902 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:52.872667 env[1647]: time="2024-02-09T09:48:52.872611251Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 09:48:52.892514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788678527.mount: Deactivated successfully.
Feb 9 09:48:52.903269 env[1647]: time="2024-02-09T09:48:52.903161968Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e\""
Feb 9 09:48:52.904462 env[1647]: time="2024-02-09T09:48:52.904397946Z" level=info msg="StartContainer for \"78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e\""
Feb 9 09:48:52.937748 systemd[1]: Started cri-containerd-78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e.scope.
Feb 9 09:48:53.003205 env[1647]: time="2024-02-09T09:48:53.003121211Z" level=info msg="StartContainer for \"78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e\" returns successfully"
Feb 9 09:48:53.010387 systemd[1]: cri-containerd-78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e.scope: Deactivated successfully.
Feb 9 09:48:53.059082 env[1647]: time="2024-02-09T09:48:53.059007710Z" level=info msg="shim disconnected" id=78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e
Feb 9 09:48:53.059082 env[1647]: time="2024-02-09T09:48:53.059076361Z" level=warning msg="cleaning up after shim disconnected" id=78e362107dddf6448cacb894808cc7495ceb391bfb859ab66d6279d62387521e namespace=k8s.io
Feb 9 09:48:53.059380 env[1647]: time="2024-02-09T09:48:53.059100804Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:53.075206 env[1647]: time="2024-02-09T09:48:53.075151937Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3966 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:53.447911 kubelet[2080]: E0209 09:48:53.447844 2080 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:53.513301 kubelet[2080]: E0209 09:48:53.513226 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:53.600785 kubelet[2080]: E0209 09:48:53.600673 2080 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 09:48:53.634412 kubelet[2080]: I0209 09:48:53.634374 2080 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191" path="/var/lib/kubelet/pods/3e913d0f-8e0d-44c4-b5ff-7f70a7f2b191/volumes"
Feb 9 09:48:53.877211 env[1647]: time="2024-02-09T09:48:53.877152359Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 09:48:53.904289 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2594658166.mount: Deactivated successfully.
Feb 9 09:48:53.916799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount781476963.mount: Deactivated successfully.
Feb 9 09:48:53.926360 env[1647]: time="2024-02-09T09:48:53.926268380Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3\""
Feb 9 09:48:53.927812 env[1647]: time="2024-02-09T09:48:53.927739627Z" level=info msg="StartContainer for \"09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3\""
Feb 9 09:48:53.971698 systemd[1]: Started cri-containerd-09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3.scope.
Feb 9 09:48:54.052661 systemd[1]: cri-containerd-09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3.scope: Deactivated successfully.
Feb 9 09:48:54.059558 env[1647]: time="2024-02-09T09:48:54.059345939Z" level=info msg="StartContainer for \"09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3\" returns successfully"
Feb 9 09:48:54.169586 systemd-timesyncd[1600]: Contacted time server 152.70.159.102:123 (0.flatcar.pool.ntp.org).
Feb 9 09:48:54.210563 env[1647]: time="2024-02-09T09:48:54.210501242Z" level=info msg="shim disconnected" id=09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3
Feb 9 09:48:54.210967 env[1647]: time="2024-02-09T09:48:54.210936940Z" level=warning msg="cleaning up after shim disconnected" id=09bbd9a610a27575df1bcc127fa258d467edf925a54aed001cc065eecf619da3 namespace=k8s.io
Feb 9 09:48:54.211112 env[1647]: time="2024-02-09T09:48:54.211084440Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:54.225508 env[1647]: time="2024-02-09T09:48:54.225441639Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4025 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:54.375965 env[1647]: time="2024-02-09T09:48:54.375888863Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:54.379164 env[1647]: time="2024-02-09T09:48:54.379103481Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:54.384062 env[1647]: time="2024-02-09T09:48:54.384014199Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 09:48:54.386910 env[1647]: time="2024-02-09T09:48:54.385699823Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Feb 9 09:48:54.390868 env[1647]: time="2024-02-09T09:48:54.390811165Z" level=info msg="CreateContainer within sandbox \"0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 09:48:54.410146 env[1647]: time="2024-02-09T09:48:54.410083437Z" level=info msg="CreateContainer within sandbox \"0201ed653af6d7872d297b9e5203799daa175763642439566a2be0eb3e506595\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cad4628c295f0c82831eb76ee9fababb7195d032e599af52b1a1a752bc9c4c00\""
Feb 9 09:48:54.411581 env[1647]: time="2024-02-09T09:48:54.411523091Z" level=info msg="StartContainer for \"cad4628c295f0c82831eb76ee9fababb7195d032e599af52b1a1a752bc9c4c00\""
Feb 9 09:48:54.457271 systemd[1]: run-containerd-runc-k8s.io-cad4628c295f0c82831eb76ee9fababb7195d032e599af52b1a1a752bc9c4c00-runc.f9YCmi.mount: Deactivated successfully.
Feb 9 09:48:54.461390 systemd[1]: Started cri-containerd-cad4628c295f0c82831eb76ee9fababb7195d032e599af52b1a1a752bc9c4c00.scope.
Feb 9 09:48:54.515106 kubelet[2080]: E0209 09:48:54.515027 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:54.523982 env[1647]: time="2024-02-09T09:48:54.523894517Z" level=info msg="StartContainer for \"cad4628c295f0c82831eb76ee9fababb7195d032e599af52b1a1a752bc9c4c00\" returns successfully"
Feb 9 09:48:54.890985 env[1647]: time="2024-02-09T09:48:54.890911822Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 09:48:54.918479 env[1647]: time="2024-02-09T09:48:54.918417379Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111\""
Feb 9 09:48:54.920002 env[1647]: time="2024-02-09T09:48:54.919955791Z" level=info msg="StartContainer for \"a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111\""
Feb 9 09:48:54.961006 systemd[1]: Started cri-containerd-a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111.scope.
Feb 9 09:48:55.014820 systemd[1]: cri-containerd-a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111.scope: Deactivated successfully.
Feb 9 09:48:55.018980 env[1647]: time="2024-02-09T09:48:55.018842182Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf11f4a34_a193_4cea_9eea_968f55c2a82c.slice/cri-containerd-a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111.scope/memory.events\": no such file or directory"
Feb 9 09:48:55.022085 env[1647]: time="2024-02-09T09:48:55.022026423Z" level=info msg="StartContainer for \"a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111\" returns successfully"
Feb 9 09:48:55.089387 env[1647]: time="2024-02-09T09:48:55.089314141Z" level=info msg="shim disconnected" id=a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111
Feb 9 09:48:55.090066 env[1647]: time="2024-02-09T09:48:55.089383796Z" level=warning msg="cleaning up after shim disconnected" id=a71b1c1c40f70043e0c0483f09f7be6e8379f6a8b9e17c28d4c07f243f2d0111 namespace=k8s.io
Feb 9 09:48:55.090066 env[1647]: time="2024-02-09T09:48:55.089407254Z" level=info msg="cleaning up dead shim"
Feb 9 09:48:55.104240 env[1647]: time="2024-02-09T09:48:55.104177266Z" level=warning msg="cleanup warnings time=\"2024-02-09T09:48:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4123 runtime=io.containerd.runc.v2\n"
Feb 9 09:48:55.415937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount430243487.mount: Deactivated successfully.
Feb 9 09:48:55.515371 kubelet[2080]: E0209 09:48:55.515314 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:55.906982 env[1647]: time="2024-02-09T09:48:55.906909992Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 09:48:55.925408 kubelet[2080]: I0209 09:48:55.925198 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-9tgt6" podStartSLOduration=3.631316949 podCreationTimestamp="2024-02-09 09:48:50 +0000 UTC" firstStartedPulling="2024-02-09 09:48:52.09354974 +0000 UTC m=+79.855963057" lastFinishedPulling="2024-02-09 09:48:54.387372297 +0000 UTC m=+82.149785626" observedRunningTime="2024-02-09 09:48:54.926485283 +0000 UTC m=+82.688898660" watchObservedRunningTime="2024-02-09 09:48:55.925139518 +0000 UTC m=+83.687552859"
Feb 9 09:48:55.934470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3186076314.mount: Deactivated successfully.
Feb 9 09:48:55.949321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount491017805.mount: Deactivated successfully.
Feb 9 09:48:55.953133 env[1647]: time="2024-02-09T09:48:55.953073544Z" level=info msg="CreateContainer within sandbox \"4cd27219b8fa386dde680ecfe3dcac954cdec28e3c89f01f628811439eae24ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac\""
Feb 9 09:48:55.954539 env[1647]: time="2024-02-09T09:48:55.954483882Z" level=info msg="StartContainer for \"aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac\""
Feb 9 09:48:55.983441 systemd[1]: Started cri-containerd-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac.scope.
Feb 9 09:48:56.067908 env[1647]: time="2024-02-09T09:48:56.067842449Z" level=info msg="StartContainer for \"aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac\" returns successfully"
Feb 9 09:48:56.171106 kubelet[2080]: I0209 09:48:56.167702 2080 setters.go:552] "Node became not ready" node="172.31.16.31" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T09:48:56Z","lastTransitionTime":"2024-02-09T09:48:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 09:48:56.515876 kubelet[2080]: E0209 09:48:56.515712 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:56.776816 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
Feb 9 09:48:57.504445 systemd[1]: run-containerd-runc-k8s.io-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac-runc.0HNH9n.mount: Deactivated successfully.
Feb 9 09:48:57.516521 kubelet[2080]: E0209 09:48:57.516451 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:58.517205 kubelet[2080]: E0209 09:48:58.517136 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:59.518125 kubelet[2080]: E0209 09:48:59.518078 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:48:59.757564 systemd[1]: run-containerd-runc-k8s.io-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac-runc.6LOxMD.mount: Deactivated successfully.
Feb 9 09:49:00.519695 kubelet[2080]: E0209 09:49:00.519578 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:00.548048 systemd-networkd[1453]: lxc_health: Link UP
Feb 9 09:49:00.552214 (udev-worker)[4695]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:49:00.554457 (udev-worker)[4694]: Network interface NamePolicy= disabled on kernel command line.
Feb 9 09:49:00.586162 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 09:49:00.585752 systemd-networkd[1453]: lxc_health: Gained carrier
Feb 9 09:49:01.520016 kubelet[2080]: E0209 09:49:01.519949 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:01.789488 systemd-networkd[1453]: lxc_health: Gained IPv6LL
Feb 9 09:49:02.123810 systemd[1]: run-containerd-runc-k8s.io-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac-runc.PHI2oy.mount: Deactivated successfully.
Feb 9 09:49:02.274466 kubelet[2080]: I0209 09:49:02.274376 2080 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-grdtf" podStartSLOduration=11.274323239 podCreationTimestamp="2024-02-09 09:48:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 09:48:56.943660159 +0000 UTC m=+84.706073524" watchObservedRunningTime="2024-02-09 09:49:02.274323239 +0000 UTC m=+90.036736568"
Feb 9 09:49:02.520678 kubelet[2080]: E0209 09:49:02.520532 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:03.521701 kubelet[2080]: E0209 09:49:03.521634 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:04.491665 systemd[1]: run-containerd-runc-k8s.io-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac-runc.DPBY2w.mount: Deactivated successfully.
Feb 9 09:49:04.522681 kubelet[2080]: E0209 09:49:04.522605 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:05.523558 kubelet[2080]: E0209 09:49:05.523491 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:06.524306 kubelet[2080]: E0209 09:49:06.524212 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:06.771554 systemd[1]: run-containerd-runc-k8s.io-aa2697e927ca9c344e3de19ad8301ebe827af9834b410bdcc31f0928be84caac-runc.shAQ1R.mount: Deactivated successfully.
Feb 9 09:49:07.525971 kubelet[2080]: E0209 09:49:07.525870 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:08.526419 kubelet[2080]: E0209 09:49:08.526346 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 09:49:09.526723 kubelet[2080]: E0209 09:49:09.526670 2080 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"