Jun 25 14:15:48.074921 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 25 14:15:48.074958 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:15:48.074981 kernel: efi: EFI v2.70 by EDK II Jun 25 14:15:48.074996 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x78553e18 Jun 25 14:15:48.075009 kernel: ACPI: Early table checksum verification disabled Jun 25 14:15:48.075022 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 25 14:15:48.075038 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 25 14:15:48.075052 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 14:15:48.075065 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 25 14:15:48.075079 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 14:15:48.075096 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 25 14:15:48.075110 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 25 14:15:48.075124 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 25 14:15:48.075138 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 14:15:48.075153 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 25 14:15:48.075172 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 25 14:15:48.075187 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 25 14:15:48.075201 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 25 14:15:48.075215 kernel: printk: bootconsole [uart0] enabled Jun 25 14:15:48.075230 kernel: NUMA: Failed to initialise from firmware Jun 25 14:15:48.075244 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 14:15:48.075259 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jun 25 14:15:48.075273 kernel: Zone ranges: Jun 25 14:15:48.075287 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 25 14:15:48.075301 kernel: DMA32 empty Jun 25 14:15:48.075315 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 25 14:15:48.075333 kernel: Movable zone start for each node Jun 25 14:15:48.075348 kernel: Early memory node ranges Jun 25 14:15:48.075362 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 25 14:15:48.075376 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 25 14:15:48.075390 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 25 14:15:48.075404 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 25 14:15:48.075418 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 25 14:15:48.075432 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 25 14:15:48.075446 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 25 14:15:48.075460 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 25 14:15:48.075474 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 14:15:48.075488 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jun 25 
14:15:48.075507 kernel: psci: probing for conduit method from ACPI. Jun 25 14:15:48.075521 kernel: psci: PSCIv1.0 detected in firmware. Jun 25 14:15:48.075542 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:15:48.075558 kernel: psci: Trusted OS migration not required Jun 25 14:15:48.075572 kernel: psci: SMC Calling Convention v1.1 Jun 25 14:15:48.075591 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:15:48.075607 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:15:48.075622 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 14:15:48.075637 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:15:48.075652 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:15:48.075710 kernel: CPU features: detected: Spectre-v2 Jun 25 14:15:48.075728 kernel: CPU features: detected: Spectre-v3a Jun 25 14:15:48.075744 kernel: CPU features: detected: Spectre-BHB Jun 25 14:15:48.075759 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:15:48.075774 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:15:48.075789 kernel: CPU features: detected: ARM erratum 1742098 Jun 25 14:15:48.075804 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 25 14:15:48.075825 kernel: alternatives: applying boot alternatives Jun 25 14:15:48.075840 kernel: Fallback order for Node 0: 0 Jun 25 14:15:48.075855 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jun 25 14:15:48.075870 kernel: Policy zone: Normal Jun 25 14:15:48.075887 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:15:48.075903 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 14:15:48.075918 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:15:48.075933 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:15:48.075948 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:15:48.075963 kernel: software IO TLB: area num 2. Jun 25 14:15:48.075982 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 25 14:15:48.075999 kernel: Memory: 3825596K/4030464K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 204868K reserved, 0K cma-reserved) Jun 25 14:15:48.076014 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 14:15:48.076030 kernel: trace event string verifier disabled Jun 25 14:15:48.076065 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:15:48.076084 kernel: rcu: RCU event tracing is enabled. Jun 25 14:15:48.076099 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 14:15:48.076114 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:15:48.076130 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:15:48.076145 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 14:15:48.076160 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 14:15:48.076181 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:15:48.076196 kernel: GICv3: 96 SPIs implemented Jun 25 14:15:48.076211 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:15:48.076226 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:15:48.076241 kernel: GICv3: GICv3 features: 16 PPIs Jun 25 14:15:48.076255 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 25 14:15:48.076270 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 25 14:15:48.076286 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 14:15:48.076301 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Jun 25 14:15:48.076316 kernel: GICv3: using LPI property table @0x00000004000c0000 Jun 25 14:15:48.076331 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 25 14:15:48.076346 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Jun 25 14:15:48.076365 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:15:48.076380 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 25 14:15:48.076395 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 25 14:15:48.076410 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 25 14:15:48.076425 kernel: Console: colour dummy device 80x25 Jun 25 14:15:48.076441 kernel: printk: console [tty1] enabled Jun 25 14:15:48.076456 kernel: ACPI: Core revision 20220331 Jun 25 14:15:48.076472 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 25 14:15:48.076487 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:15:48.076502 kernel: LSM: Security Framework initializing Jun 25 14:15:48.076521 kernel: SELinux: Initializing. Jun 25 14:15:48.076537 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:15:48.076552 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:15:48.076567 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:15:48.076583 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:15:48.076598 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:15:48.076613 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:15:48.076628 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:15:48.076643 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:15:48.076680 kernel: Platform MSI: ITS@0x10080000 domain created Jun 25 14:15:48.076699 kernel: PCI/MSI: ITS@0x10080000 domain created Jun 25 14:15:48.076714 kernel: Remapping and enabling EFI services. Jun 25 14:15:48.076730 kernel: smp: Bringing up secondary CPUs ... Jun 25 14:15:48.076770 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:15:48.076786 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 25 14:15:48.076802 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Jun 25 14:15:48.076818 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 25 14:15:48.076834 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 14:15:48.076855 kernel: SMP: Total of 2 processors activated. 
Jun 25 14:15:48.076871 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:15:48.076898 kernel: CPU features: detected: 32-bit EL1 Support Jun 25 14:15:48.076919 kernel: CPU features: detected: CRC32 instructions Jun 25 14:15:48.076935 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:15:48.076951 kernel: alternatives: applying system-wide alternatives Jun 25 14:15:48.076967 kernel: devtmpfs: initialized Jun 25 14:15:48.076984 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:15:48.077005 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 14:15:48.077021 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:15:48.077037 kernel: SMBIOS 3.0.0 present. Jun 25 14:15:48.077053 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 25 14:15:48.077069 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:15:48.077085 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:15:48.077102 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:15:48.077118 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:15:48.077134 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:15:48.077154 kernel: audit: type=2000 audit(0.246:1): state=initialized audit_enabled=0 res=1 Jun 25 14:15:48.077170 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:15:48.077186 kernel: cpuidle: using governor menu Jun 25 14:15:48.077202 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 14:15:48.077217 kernel: ASID allocator initialised with 32768 entries Jun 25 14:15:48.077233 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:15:48.077249 kernel: Serial: AMBA PL011 UART driver Jun 25 14:15:48.077265 kernel: KASLR enabled Jun 25 14:15:48.077281 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:15:48.077301 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:15:48.077317 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:15:48.077333 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:15:48.077349 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:15:48.077366 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:15:48.077382 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:15:48.077417 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:15:48.077434 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:15:48.077450 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:15:48.077472 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:15:48.077488 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:15:48.077504 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:15:48.077520 kernel: ACPI: Interpreter enabled Jun 25 14:15:48.077536 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:15:48.077552 kernel: ACPI: MCFG table detected, 1 entries Jun 25 14:15:48.077568 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 25 14:15:48.077870 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 14:15:48.078070 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 14:15:48.078259 kernel: acpi PNP0A08:00: _OSC: OS now 
controls [PCIeHotplug PME AER PCIeCapability] Jun 25 14:15:48.078446 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 25 14:15:48.078633 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 25 14:15:48.078655 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 25 14:15:48.078692 kernel: acpiphp: Slot [1] registered Jun 25 14:15:48.078708 kernel: acpiphp: Slot [2] registered Jun 25 14:15:48.078724 kernel: acpiphp: Slot [3] registered Jun 25 14:15:48.078746 kernel: acpiphp: Slot [4] registered Jun 25 14:15:48.078762 kernel: acpiphp: Slot [5] registered Jun 25 14:15:48.078778 kernel: acpiphp: Slot [6] registered Jun 25 14:15:48.078794 kernel: acpiphp: Slot [7] registered Jun 25 14:15:48.078810 kernel: acpiphp: Slot [8] registered Jun 25 14:15:48.078873 kernel: acpiphp: Slot [9] registered Jun 25 14:15:48.079012 kernel: acpiphp: Slot [10] registered Jun 25 14:15:48.079032 kernel: acpiphp: Slot [11] registered Jun 25 14:15:48.079048 kernel: acpiphp: Slot [12] registered Jun 25 14:15:48.079064 kernel: acpiphp: Slot [13] registered Jun 25 14:15:48.079086 kernel: acpiphp: Slot [14] registered Jun 25 14:15:48.079102 kernel: acpiphp: Slot [15] registered Jun 25 14:15:48.079118 kernel: acpiphp: Slot [16] registered Jun 25 14:15:48.079133 kernel: acpiphp: Slot [17] registered Jun 25 14:15:48.079149 kernel: acpiphp: Slot [18] registered Jun 25 14:15:48.079165 kernel: acpiphp: Slot [19] registered Jun 25 14:15:48.079181 kernel: acpiphp: Slot [20] registered Jun 25 14:15:48.079197 kernel: acpiphp: Slot [21] registered Jun 25 14:15:48.079213 kernel: acpiphp: Slot [22] registered Jun 25 14:15:48.079233 kernel: acpiphp: Slot [23] registered Jun 25 14:15:48.079249 kernel: acpiphp: Slot [24] registered Jun 25 14:15:48.079265 kernel: acpiphp: Slot [25] registered Jun 25 14:15:48.079280 kernel: acpiphp: Slot [26] registered Jun 25 14:15:48.079296 kernel: acpiphp: Slot [27] registered Jun 25 14:15:48.079312 kernel: acpiphp: Slot [28] registered Jun 25 14:15:48.079328 kernel: acpiphp: Slot [29] registered Jun 25 14:15:48.079343 kernel: acpiphp: Slot [30] registered Jun 25 14:15:48.079359 kernel: acpiphp: Slot [31] registered Jun 25 14:15:48.079375 kernel: PCI host bridge to bus 0000:00 Jun 25 14:15:48.079574 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 25 14:15:48.079986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 25 14:15:48.080160 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 25 14:15:48.080334 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 25 14:15:48.080554 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jun 25 14:15:48.080928 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jun 25 14:15:48.081142 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jun 25 14:15:48.081353 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 14:15:48.081571 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jun 25 14:15:48.081816 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 14:15:48.082024 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 14:15:48.082220 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jun 25 14:15:48.082416 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jun 25 14:15:48.082614 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jun 25 
14:15:48.082846 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 14:15:48.083050 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jun 25 14:15:48.083245 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jun 25 14:15:48.083441 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jun 25 14:15:48.083627 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jun 25 14:15:48.083867 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jun 25 14:15:48.084053 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 25 14:15:48.084229 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 14:15:48.084404 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 25 14:15:48.084426 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 14:15:48.084443 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 14:15:48.084459 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 14:15:48.084476 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 14:15:48.084491 kernel: iommu: Default domain type: Translated Jun 25 14:15:48.084513 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:15:48.084529 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:15:48.084545 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:15:48.084561 kernel: PTP clock support registered Jun 25 14:15:48.084577 kernel: Registered efivars operations Jun 25 14:15:48.084593 kernel: vgaarb: loaded Jun 25 14:15:48.084609 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:15:48.084625 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:15:48.084641 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:15:48.084739 kernel: pnp: PnP ACPI init Jun 25 14:15:48.084943 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 25 14:15:48.084968 kernel: pnp: PnP ACPI: found 1 devices Jun 25 14:15:48.084985 kernel: NET: Registered PF_INET protocol family Jun 25 14:15:48.085001 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:15:48.085018 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:15:48.085034 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:15:48.085050 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:15:48.085072 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:15:48.085089 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:15:48.085105 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:15:48.085121 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:15:48.085137 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:15:48.085153 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:15:48.085169 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jun 25 14:15:48.085185 kernel: kvm [1]: HYP mode not available Jun 25 14:15:48.085201 kernel: Initialise system trusted keyrings Jun 25 14:15:48.085222 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:15:48.085239 kernel: Key type asymmetric registered Jun 25 14:15:48.085255 kernel: Asymmetric 
key parser 'x509' registered Jun 25 14:15:48.085271 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:15:48.085287 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:15:48.085303 kernel: io scheduler mq-deadline registered Jun 25 14:15:48.085319 kernel: io scheduler kyber registered Jun 25 14:15:48.085335 kernel: io scheduler bfq registered Jun 25 14:15:48.085563 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 25 14:15:48.085595 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 14:15:48.085612 kernel: ACPI: button: Power Button [PWRB] Jun 25 14:15:48.085628 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 25 14:15:48.085644 kernel: ACPI: button: Sleep Button [SLPB] Jun 25 14:15:48.095206 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:15:48.095244 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 25 14:15:48.095500 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 25 14:15:48.095526 kernel: printk: console [ttyS0] disabled Jun 25 14:15:48.095553 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 25 14:15:48.095570 kernel: printk: console [ttyS0] enabled Jun 25 14:15:48.095587 kernel: printk: bootconsole [uart0] disabled Jun 25 14:15:48.095604 kernel: thunder_xcv, ver 1.0 Jun 25 14:15:48.095620 kernel: thunder_bgx, ver 1.0 Jun 25 14:15:48.095654 kernel: nicpf, ver 1.0 Jun 25 14:15:48.095705 kernel: nicvf, ver 1.0 Jun 25 14:15:48.095935 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:15:48.096152 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:15:47 UTC (1719324947) Jun 25 14:15:48.096184 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:15:48.096201 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:15:48.096218 kernel: Segment Routing with IPv6 Jun 25 14:15:48.096234 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 14:15:48.096250 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:15:48.096267 kernel: Key type dns_resolver registered Jun 25 14:15:48.096283 kernel: registered taskstats version 1 Jun 25 14:15:48.096299 kernel: Loading compiled-in X.509 certificates Jun 25 14:15:48.096316 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:15:48.096336 kernel: Key type .fscrypt registered Jun 25 14:15:48.096352 kernel: Key type fscrypt-provisioning registered Jun 25 14:15:48.096368 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 14:15:48.096384 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:15:48.096400 kernel: ima: No architecture policies found Jun 25 14:15:48.096416 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:15:48.096432 kernel: clk: Disabling unused clocks Jun 25 14:15:48.096448 kernel: Freeing unused kernel memory: 34688K Jun 25 14:15:48.096464 kernel: Run /init as init process Jun 25 14:15:48.096485 kernel: with arguments: Jun 25 14:15:48.096501 kernel: /init Jun 25 14:15:48.096516 kernel: with environment: Jun 25 14:15:48.096532 kernel: HOME=/ Jun 25 14:15:48.096548 kernel: TERM=linux Jun 25 14:15:48.096564 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:15:48.096584 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:15:48.096605 systemd[1]: Detected virtualization amazon. Jun 25 14:15:48.096629 systemd[1]: Detected architecture arm64. Jun 25 14:15:48.096646 systemd[1]: Running in initrd. Jun 25 14:15:48.096686 systemd[1]: No hostname configured, using default hostname. Jun 25 14:15:48.096706 systemd[1]: Hostname set to . Jun 25 14:15:48.096725 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:15:48.096743 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:15:48.096761 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:15:48.096779 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:15:48.096802 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:15:48.096820 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:15:48.096838 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:15:48.096855 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:15:48.096874 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:15:48.096892 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:15:48.096910 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:15:48.096950 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:15:48.096970 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:15:48.096989 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:15:48.097006 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:15:48.097025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:15:48.097042 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:15:48.097061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:15:48.097078 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:15:48.097101 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:15:48.097119 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:15:48.097137 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:15:48.097154 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... 
Jun 25 14:15:48.097172 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:15:48.097190 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:15:48.097208 kernel: audit: type=1130 audit(1719324948.060:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.097225 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:15:48.097247 kernel: audit: type=1130 audit(1719324948.067:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.097265 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:15:48.097283 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:15:48.097304 systemd-journald[242]: Journal started Jun 25 14:15:48.097411 systemd-journald[242]: Runtime Journal (/run/log/journal/ec2b91c671faa917c2cf531782cc2af4) is 8.0M, max 75.3M, 67.3M free. Jun 25 14:15:48.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.073119 systemd-modules-load[243]: Inserted module 'overlay' Jun 25 14:15:48.105109 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:15:48.103000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.110717 kernel: audit: type=1130 audit(1719324948.103:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.115696 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:15:48.125014 kernel: audit: type=1130 audit(1719324948.117:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.116955 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:15:48.118017 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:15:48.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.131631 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jun 25 14:15:48.144145 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 25 14:15:48.148781 kernel: audit: type=1130 audit(1719324948.135:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.148822 kernel: Bridge firewalling registered Jun 25 14:15:48.149144 systemd-modules-load[243]: Inserted module 'br_netfilter' Jun 25 14:15:48.166511 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:15:48.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.168000 audit: BPF prog-id=6 op=LOAD Jun 25 14:15:48.178679 kernel: audit: type=1130 audit(1719324948.167:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.178716 kernel: SCSI subsystem initialized Jun 25 14:15:48.178747 kernel: audit: type=1334 audit(1719324948.168:8): prog-id=6 op=LOAD Jun 25 14:15:48.179998 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:15:48.197805 dracut-cmdline[262]: dracut-dracut-053 Jun 25 14:15:48.197805 dracut-cmdline[262]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:15:48.224233 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:15:48.224298 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:15:48.236700 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:15:48.245153 systemd-modules-load[243]: Inserted module 'dm_multipath' Jun 25 14:15:48.249740 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:15:48.257864 kernel: audit: type=1130 audit(1719324948.250:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.259023 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:15:48.279619 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:15:48.287051 kernel: audit: type=1130 audit(1719324948.280:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:48.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.289569 systemd-resolved[266]: Positive Trust Anchors: Jun 25 14:15:48.289596 systemd-resolved[266]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:15:48.289652 systemd-resolved[266]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:15:48.379697 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:15:48.392707 kernel: iscsi: registered transport (tcp) Jun 25 14:15:48.415196 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:15:48.415280 kernel: QLogic iSCSI HBA Driver Jun 25 14:15:48.487705 kernel: random: crng init done Jun 25 14:15:48.488223 systemd-resolved[266]: Defaulting to hostname 'linux'. Jun 25 14:15:48.492218 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:15:48.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.494899 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:15:48.521359 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:15:48.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.532477 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 14:15:48.611719 kernel: raid6: neonx8 gen() 6640 MB/s Jun 25 14:15:48.628694 kernel: raid6: neonx4 gen() 6499 MB/s Jun 25 14:15:48.645690 kernel: raid6: neonx2 gen() 5455 MB/s Jun 25 14:15:48.662691 kernel: raid6: neonx1 gen() 3963 MB/s Jun 25 14:15:48.679690 kernel: raid6: int64x8 gen() 3798 MB/s Jun 25 14:15:48.696691 kernel: raid6: int64x4 gen() 3716 MB/s Jun 25 14:15:48.713690 kernel: raid6: int64x2 gen() 3591 MB/s Jun 25 14:15:48.731367 kernel: raid6: int64x1 gen() 2775 MB/s Jun 25 14:15:48.731397 kernel: raid6: using algorithm neonx8 gen() 6640 MB/s Jun 25 14:15:48.749353 kernel: raid6: .... xor() 4914 MB/s, rmw enabled Jun 25 14:15:48.749410 kernel: raid6: using neon recovery algorithm Jun 25 14:15:48.756695 kernel: xor: measuring software checksum speed Jun 25 14:15:48.758690 kernel: 8regs : 11107 MB/sec Jun 25 14:15:48.760694 kernel: 32regs : 12029 MB/sec Jun 25 14:15:48.762727 kernel: arm64_neon : 9512 MB/sec Jun 25 14:15:48.762759 kernel: xor: using function: 32regs (12029 MB/sec) Jun 25 14:15:48.853711 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:15:48.872877 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:15:48.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:48.887000 audit: BPF prog-id=7 op=LOAD Jun 25 14:15:48.888000 audit: BPF prog-id=8 op=LOAD Jun 25 14:15:48.892958 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:15:48.929815 systemd-udevd[443]: Using default interface naming scheme 'v252'. Jun 25 14:15:48.939451 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:15:48.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:48.949955 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:15:48.977431 dracut-pre-trigger[445]: rd.md=0: removing MD RAID activation Jun 25 14:15:49.038560 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:15:49.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:49.051339 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:15:49.158637 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:15:49.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:49.274216 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 14:15:49.274286 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 25 14:15:49.291811 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 14:15:49.292077 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 14:15:49.292282 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:34:fa:ff:f8:c3 Jun 25 14:15:49.298523 (udev-worker)[494]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:15:49.318299 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 25 14:15:49.318357 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 14:15:49.327695 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 14:15:49.332379 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 14:15:49.332429 kernel: GPT:9289727 != 16777215 Jun 25 14:15:49.332452 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 14:15:49.332474 kernel: GPT:9289727 != 16777215 Jun 25 14:15:49.333963 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 14:15:49.333998 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:15:49.417711 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (493) Jun 25 14:15:49.438924 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 14:15:49.486790 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (483) Jun 25 14:15:49.493233 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 14:15:49.558572 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. 
Jun 25 14:15:49.571384 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 14:15:49.576830 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. Jun 25 14:15:49.589586 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:15:49.600609 disk-uuid[594]: Primary Header is updated. Jun 25 14:15:49.600609 disk-uuid[594]: Secondary Entries is updated. Jun 25 14:15:49.600609 disk-uuid[594]: Secondary Header is updated. Jun 25 14:15:49.611705 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:15:49.616701 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:15:49.626703 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:15:50.630696 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:15:50.631153 disk-uuid[595]: The operation has completed successfully. Jun 25 14:15:50.805484 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:15:50.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:50.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:50.805711 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:15:50.829362 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:15:50.837201 sh[937]: Success Jun 25 14:15:50.865813 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:15:50.954350 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:15:50.963689 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:15:50.969189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:15:50.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:50.992696 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:15:50.992776 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:15:50.992801 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:15:50.994129 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:15:50.995277 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:15:51.106707 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 14:15:51.132871 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:15:51.134238 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:15:51.149054 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:15:51.155862 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 14:15:51.180317 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:15:51.180382 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:15:51.181565 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:15:51.197709 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:15:51.213323 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:15:51.216694 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:15:51.231932 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:15:51.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.240024 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:15:51.325553 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:15:51.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.326000 audit: BPF prog-id=9 op=LOAD Jun 25 14:15:51.337728 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:15:51.382398 systemd-networkd[1127]: lo: Link UP Jun 25 14:15:51.382422 systemd-networkd[1127]: lo: Gained carrier Jun 25 14:15:51.383622 systemd-networkd[1127]: Enumeration completed Jun 25 14:15:51.384114 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:15:51.384440 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:15:51.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.384446 systemd-networkd[1127]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:15:51.392936 systemd[1]: Reached target network.target - Network. Jun 25 14:15:51.395379 systemd-networkd[1127]: eth0: Link UP Jun 25 14:15:51.395387 systemd-networkd[1127]: eth0: Gained carrier Jun 25 14:15:51.395402 systemd-networkd[1127]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:15:51.401078 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:15:51.409972 systemd-networkd[1127]: eth0: DHCPv4 address 172.31.16.245/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 14:15:51.415238 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:15:51.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.430002 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jun 25 14:15:51.439601 iscsid[1132]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:15:51.439601 iscsid[1132]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jun 25 14:15:51.439601 iscsid[1132]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:15:51.439601 iscsid[1132]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:15:51.439601 iscsid[1132]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:15:51.439601 iscsid[1132]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:15:51.439601 iscsid[1132]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:15:51.440847 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:15:51.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.470833 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:15:51.500133 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:15:51.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.503986 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:15:51.505959 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:15:51.509186 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:15:51.524787 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:15:51.552146 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:15:51.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.626254 ignition[1061]: Ignition 2.15.0 Jun 25 14:15:51.626749 ignition[1061]: Stage: fetch-offline Jun 25 14:15:51.627289 ignition[1061]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:51.627314 ignition[1061]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:51.628536 ignition[1061]: Ignition finished successfully Jun 25 14:15:51.636583 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:15:51.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.644400 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 14:15:51.665260 ignition[1152]: Ignition 2.15.0 Jun 25 14:15:51.665286 ignition[1152]: Stage: fetch Jun 25 14:15:51.665999 ignition[1152]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:51.666028 ignition[1152]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:51.666209 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:51.676839 ignition[1152]: PUT result: OK Jun 25 14:15:51.680013 ignition[1152]: parsed url from cmdline: "" Jun 25 14:15:51.680145 ignition[1152]: no config URL provided Jun 25 14:15:51.681234 ignition[1152]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:15:51.681264 ignition[1152]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:15:51.681299 ignition[1152]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:51.687116 ignition[1152]: PUT result: OK Jun 25 14:15:51.688329 ignition[1152]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 14:15:51.691982 ignition[1152]: GET result: OK Jun 25 14:15:51.693270 ignition[1152]: parsing config with SHA512: 0a36e1a372ed6a43a3233cb702dd4530c4c6bc11755acff7beb413c2a0e552147b7d4364c8dc1061dbfd632a45201f57f4dcf4b59c497cb3ed4c11e1fbb37ed4 Jun 25 14:15:51.701215 unknown[1152]: fetched base config from "system" Jun 25 14:15:51.701650 unknown[1152]: fetched base config from "system" Jun 25 14:15:51.702118 unknown[1152]: fetched user config from "aws" Jun 25 14:15:51.707608 ignition[1152]: fetch: fetch complete Jun 25 14:15:51.707634 ignition[1152]: fetch: fetch passed Jun 25 14:15:51.707782 ignition[1152]: Ignition finished successfully Jun 25 14:15:51.714120 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 14:15:51.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.728982 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:15:51.756252 ignition[1158]: Ignition 2.15.0 Jun 25 14:15:51.756282 ignition[1158]: Stage: kargs Jun 25 14:15:51.757440 ignition[1158]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:51.757758 ignition[1158]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:51.757923 ignition[1158]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:51.762496 ignition[1158]: PUT result: OK Jun 25 14:15:51.768589 ignition[1158]: kargs: kargs passed Jun 25 14:15:51.768732 ignition[1158]: Ignition finished successfully Jun 25 14:15:51.771934 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:15:51.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.783209 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 25 14:15:51.805631 ignition[1165]: Ignition 2.15.0 Jun 25 14:15:51.806391 ignition[1165]: Stage: disks Jun 25 14:15:51.808565 ignition[1165]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:51.808633 ignition[1165]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:51.810327 ignition[1165]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:51.815109 ignition[1165]: PUT result: OK Jun 25 14:15:51.819695 ignition[1165]: disks: disks passed Jun 25 14:15:51.819818 ignition[1165]: Ignition finished successfully Jun 25 14:15:51.823886 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:15:51.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.826140 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:15:51.828132 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:15:51.830141 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:15:51.833770 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:15:51.835696 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:15:51.854388 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:15:51.902604 systemd-fsck[1173]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 14:15:51.909614 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:15:51.919266 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:15:51.919305 kernel: audit: type=1130 audit(1719324951.910:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:51.924912 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:15:52.011694 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:15:52.013539 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:15:52.016437 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:15:52.041893 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:15:52.050496 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:15:52.055470 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 14:15:52.058964 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:15:52.059312 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:15:52.068816 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 14:15:52.076709 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1190) Jun 25 14:15:52.081984 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:15:52.082051 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:15:52.082075 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:15:52.083642 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:15:52.093970 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:15:52.096261 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:15:52.481202 initrd-setup-root[1214]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:15:52.502533 initrd-setup-root[1221]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:15:52.511090 initrd-setup-root[1228]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:15:52.519091 initrd-setup-root[1235]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:15:52.848747 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:15:52.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:52.856702 kernel: audit: type=1130 audit(1719324952.849:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:52.858559 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:15:52.863718 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:15:52.878509 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:15:52.881335 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:15:52.911506 ignition[1301]: INFO : Ignition 2.15.0 Jun 25 14:15:52.911506 ignition[1301]: INFO : Stage: mount Jun 25 14:15:52.915232 ignition[1301]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:52.915232 ignition[1301]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:52.915232 ignition[1301]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:52.921713 ignition[1301]: INFO : PUT result: OK Jun 25 14:15:52.923987 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jun 25 14:15:52.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:52.931734 kernel: audit: type=1130 audit(1719324952.924:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:52.933590 ignition[1301]: INFO : mount: mount passed Jun 25 14:15:52.935106 ignition[1301]: INFO : Ignition finished successfully Jun 25 14:15:52.938372 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:15:52.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:52.946843 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:15:52.954634 kernel: audit: type=1130 audit(1719324952.939:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:53.023251 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:15:53.040801 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1311) Jun 25 14:15:53.044173 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:15:53.044243 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:15:53.044268 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:15:53.049701 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:15:53.053312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:15:53.063868 systemd-networkd[1127]: eth0: Gained IPv6LL Jun 25 14:15:53.087244 ignition[1329]: INFO : Ignition 2.15.0 Jun 25 14:15:53.089122 ignition[1329]: INFO : Stage: files Jun 25 14:15:53.091098 ignition[1329]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:53.093100 ignition[1329]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:53.095533 ignition[1329]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:53.098926 ignition[1329]: INFO : PUT result: OK Jun 25 14:15:53.117274 ignition[1329]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:15:53.141725 ignition[1329]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:15:53.144306 ignition[1329]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:15:53.186321 ignition[1329]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:15:53.189156 ignition[1329]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:15:53.192493 unknown[1329]: wrote ssh authorized keys file for user: core Jun 25 14:15:53.194722 ignition[1329]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:15:53.198401 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:15:53.202617 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:15:53.263008 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 25 14:15:53.373818 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:15:53.378393 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Jun 25 14:15:53.833336 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jun 25 14:15:54.385605 ignition[1329]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:15:54.392147 ignition[1329]: INFO : files: files passed Jun 25 14:15:54.392147 ignition[1329]: INFO : Ignition finished successfully Jun 25 14:15:54.425799 kernel: audit: type=1130 audit(1719324954.419:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" 
hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.417198 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:15:54.431736 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:15:54.439928 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:15:54.446012 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:15:54.458960 kernel: audit: type=1130 audit(1719324954.448:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.459001 kernel: audit: type=1131 audit(1719324954.448:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.448437 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:15:54.474800 initrd-setup-root-after-ignition[1355]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:15:54.474800 initrd-setup-root-after-ignition[1355]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:15:54.483355 initrd-setup-root-after-ignition[1359]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:15:54.487570 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:15:54.498095 kernel: audit: type=1130 audit(1719324954.486:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.490010 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:15:54.507973 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:15:54.551420 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:15:54.552253 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:15:54.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.561694 kernel: audit: type=1130 audit(1719324954.556:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:54.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.565611 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:15:54.567555 kernel: audit: type=1131 audit(1719324954.560:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.569694 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:15:54.576739 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:15:54.590000 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:15:54.615349 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:15:54.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.628989 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:15:54.657204 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:15:54.657580 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:15:54.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.662000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.664461 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:15:54.668537 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:15:54.672488 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:15:54.675739 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:15:54.675934 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:15:54.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.681151 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:15:54.686862 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:15:54.688735 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:15:54.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:54.690955 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:15:54.695694 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:15:54.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.698998 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:15:54.700871 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:15:54.700940 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:15:54.701002 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:15:54.744132 iscsid[1132]: iscsid shutting down. Jun 25 14:15:54.701054 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:15:54.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.701103 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:15:54.701159 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:15:54.701256 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:15:54.701419 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:15:54.702039 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:15:54.702114 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:15:54.702462 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:15:54.702536 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:15:54.702953 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:15:54.703025 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:15:54.734354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:15:54.786262 ignition[1373]: INFO : Ignition 2.15.0 Jun 25 14:15:54.786262 ignition[1373]: INFO : Stage: umount Jun 25 14:15:54.742375 systemd[1]: Stopping iscsid.service - Open-iSCSI... Jun 25 14:15:54.792527 ignition[1373]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:15:54.792527 ignition[1373]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:15:54.792527 ignition[1373]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:15:54.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.752803 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:15:54.752934 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:15:54.817484 ignition[1373]: INFO : PUT result: OK Jun 25 14:15:54.817484 ignition[1373]: INFO : umount: umount passed Jun 25 14:15:54.817484 ignition[1373]: INFO : Ignition finished successfully Jun 25 14:15:54.764203 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... 
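The files stage earlier wrote /etc/systemd/system/prepare-helm.service, set its preset to enabled, and linked /etc/extensions/kubernetes.raw to the downloaded sysext image (all under /sysroot, i.e. /etc on the real root). Once the system is up, those results can be spot-checked; the sketch below uses only paths and unit names taken from the log, with systemctl is-enabled as an assumed verification method.

# Spot-check the artifacts the Ignition files stage reported writing.
import os
import subprocess

def check_files_stage() -> None:
    # The log shows this symlink being created under /sysroot, i.e. /etc on the real root.
    link = "/etc/extensions/kubernetes.raw"
    target = os.readlink(link)
    assert target == "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw", target

    # "setting preset to enabled for prepare-helm.service" should leave the unit enabled.
    state = subprocess.run(
        ["systemctl", "is-enabled", "prepare-helm.service"],
        capture_output=True, text=True,
    ).stdout.strip()
    print("prepare-helm.service:", state)

if __name__ == "__main__":
    check_files_stage()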
Jun 25 14:15:54.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.792445 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:15:54.792615 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:15:54.801311 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:15:54.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.801450 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:15:54.839490 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:15:54.840464 systemd[1]: iscsid.service: Deactivated successfully. Jun 25 14:15:54.840689 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jun 25 14:15:54.852712 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:15:54.852909 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:15:54.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.857408 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:15:54.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.879000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.857625 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:15:54.860149 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:15:54.860255 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:15:54.876354 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:15:54.876475 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:15:54.878905 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 14:15:54.879012 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 14:15:54.880296 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:15:54.880404 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:15:54.881125 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:15:54.881650 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 25 14:15:54.892190 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:15:54.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.896228 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:15:54.903922 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:15:54.905741 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:15:54.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.905929 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:15:54.912410 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:15:54.912518 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:15:54.923129 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:15:54.923238 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:15:54.928149 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:15:54.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.939183 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:15:54.939401 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:15:54.951524 systemd[1]: Stopped target network.target - Network. Jun 25 14:15:54.954469 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:15:54.954559 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:15:54.956739 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:15:54.960597 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:15:54.965803 systemd-networkd[1127]: eth0: DHCPv6 lease lost Jun 25 14:15:54.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:54.978873 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:15:54.979101 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:15:54.980641 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:15:54.980934 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:15:54.982464 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:15:54.982546 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:15:55.000000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:15:55.000000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:15:55.001889 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jun 25 14:15:55.005000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.004594 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:15:55.004762 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:15:55.007622 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:15:55.007883 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:15:55.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.023632 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:15:55.023786 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 25 14:15:55.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.030250 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:15:55.030357 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:15:55.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.044900 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:15:55.051041 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:15:55.051202 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:15:55.074229 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:15:55.076294 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:15:55.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.081246 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:15:55.081370 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:15:55.085478 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:15:55.085601 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:15:55.091450 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:15:55.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.091555 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:15:55.103190 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:15:55.103297 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:15:55.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:55.116361 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 25 14:15:55.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.116465 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:15:55.125635 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:15:55.133795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:15:55.134268 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:15:55.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.144333 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:15:55.145503 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:15:55.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.148776 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:15:55.148988 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:15:55.157194 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:15:55.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:55.174708 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:15:55.192097 systemd[1]: Switching root. Jun 25 14:15:55.218615 systemd-journald[242]: Journal stopped Jun 25 14:15:57.498065 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). Jun 25 14:15:57.498218 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 14:15:57.498257 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:15:57.498291 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:15:57.498324 kernel: SELinux: policy capability open_perms=1 Jun 25 14:15:57.498363 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:15:57.498399 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:15:57.498431 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:15:57.498463 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:15:57.498495 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:15:57.498531 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:15:57.498565 systemd[1]: Successfully loaded SELinux policy in 115.363ms. Jun 25 14:15:57.498619 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 30.659ms. 
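After switch-root the kernel logs which SELinux policy capabilities the loaded policy sets (network_peer_controls, open_perms, and so on). On a running system those flags are exported through selinuxfs; the short sketch below reads them back, assuming selinuxfs is mounted at the conventional /sys/fs/selinux location.

# Read back the SELinux policy capabilities reported in the kernel log above.
# Assumes selinuxfs is mounted at /sys/fs/selinux (the conventional location).
from pathlib import Path

CAPS_DIR = Path("/sys/fs/selinux/policy_capabilities")

def policy_capabilities() -> dict:
    caps = {}
    for entry in sorted(CAPS_DIR.iterdir()):
        caps[entry.name] = int(entry.read_text().strip())
    return caps

if __name__ == "__main__":
    for name, value in policy_capabilities().items():
        # e.g. "SELinux: policy capability network_peer_controls=1" in the log
        print(f"policy capability {name}={value}")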
Jun 25 14:15:57.498694 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:15:57.498737 systemd[1]: Detected virtualization amazon. Jun 25 14:15:57.498772 systemd[1]: Detected architecture arm64. Jun 25 14:15:57.498805 systemd[1]: Detected first boot. Jun 25 14:15:57.498836 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:15:57.498875 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:15:57.498910 kernel: kauditd_printk_skb: 42 callbacks suppressed Jun 25 14:15:57.498946 kernel: audit: type=1334 audit(1719324957.050:86): prog-id=12 op=LOAD Jun 25 14:15:57.498979 kernel: audit: type=1334 audit(1719324957.052:87): prog-id=3 op=UNLOAD Jun 25 14:15:57.499012 kernel: audit: type=1334 audit(1719324957.054:88): prog-id=13 op=LOAD Jun 25 14:15:57.499043 kernel: audit: type=1334 audit(1719324957.055:89): prog-id=14 op=LOAD Jun 25 14:15:57.499077 kernel: audit: type=1334 audit(1719324957.055:90): prog-id=4 op=UNLOAD Jun 25 14:15:57.499120 kernel: audit: type=1334 audit(1719324957.055:91): prog-id=5 op=UNLOAD Jun 25 14:15:57.499159 kernel: audit: type=1334 audit(1719324957.056:92): prog-id=15 op=LOAD Jun 25 14:15:57.499194 kernel: audit: type=1334 audit(1719324957.056:93): prog-id=12 op=UNLOAD Jun 25 14:15:57.499232 kernel: audit: type=1334 audit(1719324957.057:94): prog-id=16 op=LOAD Jun 25 14:15:57.499268 kernel: audit: type=1334 audit(1719324957.059:95): prog-id=17 op=LOAD Jun 25 14:15:57.499304 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 25 14:15:57.499343 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 25 14:15:57.499383 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 25 14:15:57.499422 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:15:57.499468 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:15:57.499507 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:15:57.499541 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:15:57.499575 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:15:57.499609 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:15:57.499645 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:15:57.499738 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:15:57.499780 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:15:57.499818 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:15:57.499856 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:15:57.499889 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:15:57.499930 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 25 14:15:57.499970 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
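At this point systemd records the detected virtualization ("amazon"), the architecture (arm64), and that the machine ID was initialized from the VM UUID on first boot. The same facts can be queried from userspace; the sketch below relies on systemd-detect-virt and /etc/machine-id, which are standard systemd interfaces rather than anything specific to this log.

# Query the same facts systemd logs at startup: virtualization, architecture, machine ID.
import platform
import subprocess
from pathlib import Path

def startup_facts() -> dict:
    virt = subprocess.run(
        ["systemd-detect-virt"], capture_output=True, text=True
    ).stdout.strip()                          # "amazon" on this instance, per the log
    return {
        "virtualization": virt,
        "architecture": platform.machine(),   # aarch64 / arm64 here
        "machine-id": Path("/etc/machine-id").read_text().strip(),
    }

if __name__ == "__main__":
    for key, value in startup_facts().items():
        print(f"{key}: {value}")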
Jun 25 14:15:57.500010 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 25 14:15:57.500051 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:15:57.500095 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:15:57.500137 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:15:57.500191 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:15:57.500229 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:15:57.500263 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:15:57.500297 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:15:57.500331 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:15:57.500362 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:15:57.500396 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:15:57.500427 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:15:57.500462 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:15:57.500496 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:15:57.500542 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:15:57.500576 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:15:57.500612 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:15:57.500647 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:15:57.507313 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:15:57.507360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:15:57.507393 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:15:57.507433 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:15:57.507464 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:15:57.507497 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:15:57.507529 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:15:57.507560 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:15:57.507593 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:15:57.507626 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:15:57.507683 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 25 14:15:57.507766 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 25 14:15:57.507993 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 25 14:15:57.508038 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 25 14:15:57.508068 systemd[1]: Stopped systemd-fsck-usr.service. Jun 25 14:15:57.508132 systemd[1]: Stopped systemd-journald.service - Journal Service. Jun 25 14:15:57.508268 systemd[1]: Starting systemd-journald.service - Journal Service... 
Jun 25 14:15:57.508303 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:15:57.508338 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:15:57.508372 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:15:57.508440 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:15:57.508613 systemd[1]: verity-setup.service: Deactivated successfully. Jun 25 14:15:57.508645 systemd[1]: Stopped verity-setup.service. Jun 25 14:15:57.508702 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:15:57.508766 kernel: loop: module loaded Jun 25 14:15:57.508985 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:15:57.509020 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:15:57.509065 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:15:57.509097 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:15:57.509166 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:15:57.522564 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:15:57.522603 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:15:57.522635 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 14:15:57.522706 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:15:57.522742 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:15:57.522774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:15:57.522809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:15:57.522840 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:15:57.522879 kernel: fuse: init (API version 7.37) Jun 25 14:15:57.522922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:15:57.522957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:15:57.522989 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:15:57.523020 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:15:57.523056 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:15:57.523090 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:15:57.523120 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:15:57.523153 systemd-journald[1481]: Journal started Jun 25 14:15:57.523255 systemd-journald[1481]: Runtime Journal (/run/log/journal/ec2b91c671faa917c2cf531782cc2af4) is 8.0M, max 75.3M, 67.3M free. 
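systemd-journald reports the size and ceiling of the runtime journal under /run/log/journal when it starts ("is 8.0M, max 75.3M, 67.3M free" below). Comparable numbers can be pulled afterwards with journalctl; the sketch uses the standard --disk-usage option and deliberately leaves the human-readable output unparsed.

# Report journal disk usage, mirroring the "Runtime Journal ... is 8.0M, max 75.3M" line.
import subprocess

def journal_disk_usage() -> str:
    out = subprocess.run(
        ["journalctl", "--disk-usage"], capture_output=True, text=True, check=True
    ).stdout.strip()
    # Typical output: "Archived and active journals take up <size> in the file system."
    return out

if __name__ == "__main__":
    print(journal_disk_usage())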
Jun 25 14:15:55.662000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 25 14:15:55.919000 audit: BPF prog-id=10 op=LOAD Jun 25 14:15:55.919000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:15:55.919000 audit: BPF prog-id=11 op=LOAD Jun 25 14:15:55.919000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:15:57.050000 audit: BPF prog-id=12 op=LOAD Jun 25 14:15:57.052000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:15:57.054000 audit: BPF prog-id=13 op=LOAD Jun 25 14:15:57.055000 audit: BPF prog-id=14 op=LOAD Jun 25 14:15:57.055000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:15:57.055000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:15:57.056000 audit: BPF prog-id=15 op=LOAD Jun 25 14:15:57.056000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:15:57.057000 audit: BPF prog-id=16 op=LOAD Jun 25 14:15:57.059000 audit: BPF prog-id=17 op=LOAD Jun 25 14:15:57.059000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:15:57.059000 audit: BPF prog-id=14 op=UNLOAD Jun 25 14:15:57.060000 audit: BPF prog-id=18 op=LOAD Jun 25 14:15:57.060000 audit: BPF prog-id=15 op=UNLOAD Jun 25 14:15:57.061000 audit: BPF prog-id=19 op=LOAD Jun 25 14:15:57.062000 audit: BPF prog-id=20 op=LOAD Jun 25 14:15:57.062000 audit: BPF prog-id=16 op=UNLOAD Jun 25 14:15:57.062000 audit: BPF prog-id=17 op=UNLOAD Jun 25 14:15:57.063000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.067000 audit: BPF prog-id=18 op=UNLOAD Jun 25 14:15:57.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.329000 audit: BPF prog-id=21 op=LOAD Jun 25 14:15:57.329000 audit: BPF prog-id=22 op=LOAD Jun 25 14:15:57.329000 audit: BPF prog-id=23 op=LOAD Jun 25 14:15:57.329000 audit: BPF prog-id=19 op=UNLOAD Jun 25 14:15:57.329000 audit: BPF prog-id=20 op=UNLOAD Jun 25 14:15:57.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:57.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.551502 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:15:57.551566 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:15:57.551607 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:15:57.551646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:15:57.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.461000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:15:57.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:57.461000 audit[1481]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffcb1d68c0 a2=4000 a3=1 items=0 ppid=1 pid=1481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:15:57.461000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:15:57.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.036202 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:15:57.592784 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:15:57.592837 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:15:57.592876 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:15:57.592914 kernel: ACPI: bus type drm_connector registered Jun 25 14:15:57.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.036225 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 14:15:57.064523 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 25 14:15:57.577652 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:15:57.578037 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:15:57.581012 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:15:57.583586 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:15:57.586514 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:15:57.599085 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:15:57.604298 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:15:57.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:57.636408 systemd-journald[1481]: Time spent on flushing to /var/log/journal/ec2b91c671faa917c2cf531782cc2af4 is 59.217ms for 1057 entries. Jun 25 14:15:57.636408 systemd-journald[1481]: System Journal (/var/log/journal/ec2b91c671faa917c2cf531782cc2af4) is 8.0M, max 195.6M, 187.6M free. Jun 25 14:15:57.706419 systemd-journald[1481]: Received client request to flush runtime journal. Jun 25 14:15:57.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.622549 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:15:57.625084 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:15:57.670857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:15:57.708366 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:15:57.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.734893 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:15:57.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.742151 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:15:57.744894 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:15:57.756000 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:15:57.774890 udevadm[1516]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jun 25 14:15:57.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:57.823969 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:15:58.621014 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:15:58.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:58.622000 audit: BPF prog-id=24 op=LOAD Jun 25 14:15:58.622000 audit: BPF prog-id=25 op=LOAD Jun 25 14:15:58.622000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:15:58.622000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:15:58.628047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:15:58.678366 systemd-udevd[1519]: Using default interface naming scheme 'v252'. 
Jun 25 14:15:58.747299 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:15:58.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:58.749000 audit: BPF prog-id=26 op=LOAD Jun 25 14:15:58.757042 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:15:58.760000 audit: BPF prog-id=27 op=LOAD Jun 25 14:15:58.760000 audit: BPF prog-id=28 op=LOAD Jun 25 14:15:58.760000 audit: BPF prog-id=29 op=LOAD Jun 25 14:15:58.764283 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:15:58.860464 (udev-worker)[1526]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:15:58.871516 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 25 14:15:58.872426 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:15:58.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:58.880717 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1522) Jun 25 14:15:59.045360 systemd-networkd[1524]: lo: Link UP Jun 25 14:15:59.045381 systemd-networkd[1524]: lo: Gained carrier Jun 25 14:15:59.046340 systemd-networkd[1524]: Enumeration completed Jun 25 14:15:59.046526 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:15:59.046564 systemd-networkd[1524]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:15:59.046571 systemd-networkd[1524]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:15:59.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.058596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:15:59.056756 systemd-networkd[1524]: eth0: Link UP Jun 25 14:15:59.057038 systemd-networkd[1524]: eth0: Gained carrier Jun 25 14:15:59.057064 systemd-networkd[1524]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:15:59.057459 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:15:59.075904 systemd-networkd[1524]: eth0: DHCPv4 address 172.31.16.245/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 14:15:59.200738 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1545) Jun 25 14:15:59.350410 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 14:15:59.354537 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:15:59.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:59.361308 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:15:59.395901 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:15:59.433613 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:15:59.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.436172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:15:59.444136 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:15:59.456715 lvm[1640]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:15:59.489623 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:15:59.491993 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:15:59.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.494136 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:15:59.494184 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:15:59.496285 systemd[1]: Reached target machines.target - Containers. Jun 25 14:15:59.510077 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:15:59.512365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:15:59.512522 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:15:59.515642 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:15:59.522814 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:15:59.534124 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:15:59.540020 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:15:59.544597 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1642 (bootctl) Jun 25 14:15:59.550381 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:15:59.570789 kernel: loop0: detected capacity change from 0 to 59648 Jun 25 14:15:59.593521 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:15:59.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:15:59.690968 systemd-fsck[1650]: fsck.fat 4.2 (2021-01-31) Jun 25 14:15:59.690968 systemd-fsck[1650]: /dev/nvme0n1p1: 242 files, 114659/258078 clusters Jun 25 14:15:59.693052 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:15:59.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.700123 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:15:59.722581 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:15:59.748711 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:15:59.758355 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:15:59.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:15:59.771917 kernel: loop1: detected capacity change from 0 to 113264 Jun 25 14:15:59.873709 kernel: loop2: detected capacity change from 0 to 51896 Jun 25 14:15:59.978723 kernel: loop3: detected capacity change from 0 to 194096 Jun 25 14:16:00.232007 systemd-networkd[1524]: eth0: Gained IPv6LL Jun 25 14:16:00.235728 kernel: loop4: detected capacity change from 0 to 59648 Jun 25 14:16:00.239614 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:16:00.240000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:00.257724 kernel: loop5: detected capacity change from 0 to 113264 Jun 25 14:16:00.273704 kernel: loop6: detected capacity change from 0 to 51896 Jun 25 14:16:00.286705 kernel: loop7: detected capacity change from 0 to 194096 Jun 25 14:16:00.306178 (sd-sysext)[1669]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 14:16:00.307833 (sd-sysext)[1669]: Merged extensions into '/usr'. Jun 25 14:16:00.310888 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:16:00.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:00.320543 systemd[1]: Starting ensure-sysext.service... Jun 25 14:16:00.327782 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:16:00.373098 systemd[1]: Reloading. Jun 25 14:16:00.397398 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:16:00.401430 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:16:00.402962 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:16:00.411727 systemd-tmpfiles[1671]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
Jun 25 14:16:00.808267 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:16:00.836142 ldconfig[1641]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:16:00.971000 audit: BPF prog-id=30 op=LOAD Jun 25 14:16:00.972000 audit: BPF prog-id=27 op=UNLOAD Jun 25 14:16:00.972000 audit: BPF prog-id=31 op=LOAD Jun 25 14:16:00.972000 audit: BPF prog-id=32 op=LOAD Jun 25 14:16:00.973000 audit: BPF prog-id=28 op=UNLOAD Jun 25 14:16:00.973000 audit: BPF prog-id=29 op=UNLOAD Jun 25 14:16:00.973000 audit: BPF prog-id=33 op=LOAD Jun 25 14:16:00.974000 audit: BPF prog-id=34 op=LOAD Jun 25 14:16:00.974000 audit: BPF prog-id=24 op=UNLOAD Jun 25 14:16:00.974000 audit: BPF prog-id=25 op=UNLOAD Jun 25 14:16:00.975000 audit: BPF prog-id=35 op=LOAD Jun 25 14:16:00.975000 audit: BPF prog-id=26 op=UNLOAD Jun 25 14:16:00.978000 audit: BPF prog-id=36 op=LOAD Jun 25 14:16:00.978000 audit: BPF prog-id=21 op=UNLOAD Jun 25 14:16:00.979000 audit: BPF prog-id=37 op=LOAD Jun 25 14:16:00.979000 audit: BPF prog-id=38 op=LOAD Jun 25 14:16:00.979000 audit: BPF prog-id=22 op=UNLOAD Jun 25 14:16:00.979000 audit: BPF prog-id=23 op=UNLOAD Jun 25 14:16:01.004206 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:16:01.007596 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:16:01.008000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.010465 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:16:01.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.025124 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:16:01.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.035205 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:01.043902 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 25 14:16:01.049631 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:16:01.053000 audit: BPF prog-id=39 op=LOAD Jun 25 14:16:01.058176 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:16:01.061000 audit: BPF prog-id=40 op=LOAD Jun 25 14:16:01.070038 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:16:01.080992 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:16:01.093941 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:01.099402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 25 14:16:01.108426 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:16:01.119385 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:16:01.121786 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:01.122131 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:01.127645 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:01.128713 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:01.128961 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:01.139827 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:01.151888 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:16:01.154264 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:01.154603 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:01.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.157440 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:16:01.157852 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:16:01.160000 audit[1754]: SYSTEM_BOOT pid=1754 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.163091 systemd[1]: Finished ensure-sysext.service. Jun 25 14:16:01.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.173020 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:16:01.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.201290 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Jun 25 14:16:01.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.211028 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:16:01.229533 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:16:01.229939 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:16:01.232413 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:16:01.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.233326 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:16:01.233707 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:16:01.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.234000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.236120 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:16:01.238252 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:16:01.238606 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:16:01.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.256177 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:16:01.259180 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:16:01.270876 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jun 25 14:16:01.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:01.301000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:16:01.301000 audit[1768]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd57aefe0 a2=420 a3=0 items=0 ppid=1743 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:01.301000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:16:01.303625 augenrules[1768]: No rules Jun 25 14:16:01.305342 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:16:01.328647 systemd-resolved[1750]: Positive Trust Anchors: Jun 25 14:16:01.328693 systemd-resolved[1750]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:16:01.328747 systemd-resolved[1750]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:16:01.337046 systemd-resolved[1750]: Defaulting to hostname 'linux'. Jun 25 14:16:01.341330 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:16:01.343627 systemd[1]: Reached target network.target - Network. Jun 25 14:16:01.345583 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:16:01.347864 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:16:01.355591 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:16:01.357976 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:16:01.360148 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:16:01.363354 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:16:01.365463 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:16:01.367570 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 25 14:16:01.367628 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:16:01.369406 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:16:01.371716 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:16:01.374104 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:16:01.376185 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:16:01.378945 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:16:01.384025 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jun 25 14:16:01.392404 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:16:01.394622 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:01.395636 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:16:01.397973 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:16:01.399955 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:16:01.401939 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:16:01.402192 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:16:01.411232 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:16:01.419097 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 14:16:01.424914 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:16:01.430757 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:16:01.440634 jq[1779]: false Jun 25 14:16:01.447770 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:16:01.449842 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:16:01.454922 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:01.460187 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:16:01.465247 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:16:01.477924 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:16:01.483021 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 14:16:01.493024 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:16:01.499013 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:16:01.517081 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:16:01.519129 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:01.519281 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 25 14:16:01.520266 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:16:01.522846 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:16:01.531950 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 14:16:01.541100 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:16:01.541605 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:16:01.576902 dbus-daemon[1778]: [system] SELinux support is enabled Jun 25 14:16:01.577942 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jun 25 14:16:01.584201 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:16:01.584287 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:16:01.586503 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:16:01.586542 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:16:01.594578 dbus-daemon[1778]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1524 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 14:16:01.596395 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 14:16:01.599761 jq[1798]: true Jun 25 14:16:01.610032 extend-filesystems[1780]: Found loop4 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found loop5 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found loop6 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found loop7 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found nvme0n1 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found nvme0n1p1 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found nvme0n1p2 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found nvme0n1p3 Jun 25 14:16:01.610032 extend-filesystems[1780]: Found usr Jun 25 14:16:01.639967 extend-filesystems[1780]: Found nvme0n1p4 Jun 25 14:16:01.639967 extend-filesystems[1780]: Found nvme0n1p6 Jun 25 14:16:01.639967 extend-filesystems[1780]: Found nvme0n1p7 Jun 25 14:16:01.639967 extend-filesystems[1780]: Found nvme0n1p9 Jun 25 14:16:01.639967 extend-filesystems[1780]: Checking size of /dev/nvme0n1p9 Jun 25 14:16:01.620654 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 14:16:01.628617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:16:01.631521 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:16:01.633061 systemd-timesyncd[1752]: Contacted time server 23.150.41.122:123 (0.flatcar.pool.ntp.org). Jun 25 14:16:01.633177 systemd-timesyncd[1752]: Initial clock synchronization to Tue 2024-06-25 14:16:01.564849 UTC. Jun 25 14:16:01.675180 tar[1801]: linux-arm64/helm Jun 25 14:16:01.697248 jq[1811]: true Jun 25 14:16:01.698776 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:16:01.699158 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:16:01.702459 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:16:01.713034 extend-filesystems[1780]: Resized partition /dev/nvme0n1p9 Jun 25 14:16:01.711898 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 14:16:01.734401 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 14:16:01.742129 extend-filesystems[1823]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 14:16:01.746922 update_engine[1795]: I0625 14:16:01.738925 1795 main.cc:92] Flatcar Update Engine starting Jun 25 14:16:01.752243 systemd[1]: Started update-engine.service - Update Engine. 
Jun 25 14:16:01.760034 update_engine[1795]: I0625 14:16:01.752350 1795 update_check_scheduler.cc:74] Next update check in 5m57s Jun 25 14:16:01.757544 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 25 14:16:01.778701 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 14:16:01.878761 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 14:16:01.929283 extend-filesystems[1823]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 14:16:01.929283 extend-filesystems[1823]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 14:16:01.929283 extend-filesystems[1823]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 14:16:01.943317 extend-filesystems[1780]: Resized filesystem in /dev/nvme0n1p9 Jun 25 14:16:01.933201 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:16:01.965568 bash[1840]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:16:01.933588 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:16:01.946730 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:16:01.957101 systemd[1]: Starting sshkeys.service... Jun 25 14:16:01.973492 systemd-logind[1794]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 14:16:01.973561 systemd-logind[1794]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 25 14:16:01.977796 systemd-logind[1794]: New seat seat0. Jun 25 14:16:02.000202 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:16:02.005585 amazon-ssm-agent[1822]: Initializing new seelog logger Jun 25 14:16:02.010523 amazon-ssm-agent[1822]: New Seelog Logger Creation Complete Jun 25 14:16:02.010908 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.011024 amazon-ssm-agent[1822]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.011840 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 processing appconfig overrides Jun 25 14:16:02.015334 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.017575 amazon-ssm-agent[1822]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.017950 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 processing appconfig overrides Jun 25 14:16:02.019091 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.019250 amazon-ssm-agent[1822]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.019482 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 processing appconfig overrides Jun 25 14:16:02.021645 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO Proxy environment variables: Jun 25 14:16:02.029294 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.030460 amazon-ssm-agent[1822]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:02.030823 amazon-ssm-agent[1822]: 2024/06/25 14:16:02 processing appconfig overrides Jun 25 14:16:02.035245 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 14:16:02.048628 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 25 14:16:02.144252 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO https_proxy: Jun 25 14:16:02.264722 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO http_proxy: Jun 25 14:16:02.289311 locksmithd[1826]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:16:02.377369 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO no_proxy: Jun 25 14:16:02.496943 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO Checking if agent identity type OnPrem can be assumed Jun 25 14:16:02.521341 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 14:16:02.521630 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 14:16:02.525507 dbus-daemon[1778]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1810 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 14:16:02.535745 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1841) Jun 25 14:16:02.556756 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 14:16:02.598827 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO Checking if agent identity type EC2 can be assumed Jun 25 14:16:02.611596 polkitd[1880]: Started polkitd version 121 Jun 25 14:16:02.678036 polkitd[1880]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 14:16:02.678215 polkitd[1880]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 14:16:02.698132 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO Agent will take identity from EC2 Jun 25 14:16:02.706939 polkitd[1880]: Finished loading, compiling and executing 2 rules Jun 25 14:16:02.710959 dbus-daemon[1778]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 14:16:02.711285 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 14:16:02.722254 polkitd[1880]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 14:16:02.787399 coreos-metadata[1852]: Jun 25 14:16:02.787 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 14:16:02.790843 coreos-metadata[1852]: Jun 25 14:16:02.790 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 14:16:02.791594 coreos-metadata[1852]: Jun 25 14:16:02.791 INFO Fetch successful Jun 25 14:16:02.791594 coreos-metadata[1852]: Jun 25 14:16:02.791 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 14:16:02.792783 coreos-metadata[1852]: Jun 25 14:16:02.792 INFO Fetch successful Jun 25 14:16:02.795598 unknown[1852]: wrote ssh authorized keys file for user: core Jun 25 14:16:02.802725 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:02.832496 update-ssh-keys[1928]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:16:02.833600 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 14:16:02.840069 systemd[1]: Finished sshkeys.service. Jun 25 14:16:02.856351 systemd-resolved[1750]: System hostname changed to 'ip-172-31-16-245'. 
Jun 25 14:16:02.856361 systemd-hostnamed[1810]: Hostname set to (transient) Jun 25 14:16:02.902100 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:02.988590 coreos-metadata[1777]: Jun 25 14:16:02.986 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 14:16:02.990533 coreos-metadata[1777]: Jun 25 14:16:02.990 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 14:16:02.991046 coreos-metadata[1777]: Jun 25 14:16:02.990 INFO Fetch successful Jun 25 14:16:02.991544 coreos-metadata[1777]: Jun 25 14:16:02.991 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 14:16:02.992121 coreos-metadata[1777]: Jun 25 14:16:02.991 INFO Fetch successful Jun 25 14:16:02.992561 coreos-metadata[1777]: Jun 25 14:16:02.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 14:16:02.992895 coreos-metadata[1777]: Jun 25 14:16:02.992 INFO Fetch successful Jun 25 14:16:03.000849 coreos-metadata[1777]: Jun 25 14:16:02.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 14:16:03.001202 coreos-metadata[1777]: Jun 25 14:16:03.001 INFO Fetch successful Jun 25 14:16:03.001464 coreos-metadata[1777]: Jun 25 14:16:03.001 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 14:16:03.009236 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:03.013639 coreos-metadata[1777]: Jun 25 14:16:03.013 INFO Fetch failed with 404: resource not found Jun 25 14:16:03.014278 coreos-metadata[1777]: Jun 25 14:16:03.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 14:16:03.016572 coreos-metadata[1777]: Jun 25 14:16:03.016 INFO Fetch successful Jun 25 14:16:03.016572 coreos-metadata[1777]: Jun 25 14:16:03.016 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 14:16:03.017312 coreos-metadata[1777]: Jun 25 14:16:03.017 INFO Fetch successful Jun 25 14:16:03.017312 coreos-metadata[1777]: Jun 25 14:16:03.017 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 14:16:03.020726 coreos-metadata[1777]: Jun 25 14:16:03.020 INFO Fetch successful Jun 25 14:16:03.020726 coreos-metadata[1777]: Jun 25 14:16:03.020 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 14:16:03.023201 coreos-metadata[1777]: Jun 25 14:16:03.023 INFO Fetch successful Jun 25 14:16:03.023201 coreos-metadata[1777]: Jun 25 14:16:03.023 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 14:16:03.024070 coreos-metadata[1777]: Jun 25 14:16:03.024 INFO Fetch successful Jun 25 14:16:03.064644 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 14:16:03.067479 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jun 25 14:16:03.108541 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 14:16:03.165438 containerd[1804]: time="2024-06-25T14:16:03.165284480Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:16:03.220423 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 25 14:16:03.319859 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 14:16:03.326134 containerd[1804]: time="2024-06-25T14:16:03.326051687Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:16:03.326134 containerd[1804]: time="2024-06-25T14:16:03.326136007Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.329610 containerd[1804]: time="2024-06-25T14:16:03.329534222Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:03.331777 containerd[1804]: time="2024-06-25T14:16:03.331720553Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.332423 containerd[1804]: time="2024-06-25T14:16:03.332356899Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:03.332592 containerd[1804]: time="2024-06-25T14:16:03.332558985Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:16:03.332953 containerd[1804]: time="2024-06-25T14:16:03.332918049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.333936 containerd[1804]: time="2024-06-25T14:16:03.333884705Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:03.334141 containerd[1804]: time="2024-06-25T14:16:03.334107540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.335305 containerd[1804]: time="2024-06-25T14:16:03.335255259Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.338541 containerd[1804]: time="2024-06-25T14:16:03.338482799Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.340890 containerd[1804]: time="2024-06-25T14:16:03.340828692Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:16:03.341152 containerd[1804]: time="2024-06-25T14:16:03.341116158Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:03.341557 containerd[1804]: time="2024-06-25T14:16:03.341511694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:03.341759 containerd[1804]: time="2024-06-25T14:16:03.341729205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:16:03.342033 containerd[1804]: time="2024-06-25T14:16:03.341997208Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:16:03.342163 containerd[1804]: time="2024-06-25T14:16:03.342133770Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:16:03.367197 containerd[1804]: time="2024-06-25T14:16:03.367139845Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:16:03.367613 containerd[1804]: time="2024-06-25T14:16:03.367557989Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:16:03.368026 containerd[1804]: time="2024-06-25T14:16:03.367964293Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:16:03.368419 containerd[1804]: time="2024-06-25T14:16:03.368351325Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jun 25 14:16:03.368755 containerd[1804]: time="2024-06-25T14:16:03.368706422Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:16:03.368915 containerd[1804]: time="2024-06-25T14:16:03.368885877Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:16:03.369226 containerd[1804]: time="2024-06-25T14:16:03.369190781Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 14:16:03.369752 containerd[1804]: time="2024-06-25T14:16:03.369675544Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:16:03.370356 containerd[1804]: time="2024-06-25T14:16:03.370298013Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:16:03.370544 containerd[1804]: time="2024-06-25T14:16:03.370510176Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:16:03.370816 containerd[1804]: time="2024-06-25T14:16:03.370763159Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:16:03.370974 containerd[1804]: time="2024-06-25T14:16:03.370942852Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.371401 containerd[1804]: time="2024-06-25T14:16:03.371349692Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.371581 containerd[1804]: time="2024-06-25T14:16:03.371549181Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.371781 containerd[1804]: time="2024-06-25T14:16:03.371749945Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.371932 containerd[1804]: time="2024-06-25T14:16:03.371901825Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jun 25 14:16:03.372083 containerd[1804]: time="2024-06-25T14:16:03.372052991Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.372347 containerd[1804]: time="2024-06-25T14:16:03.372312953Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.372835 containerd[1804]: time="2024-06-25T14:16:03.372783554Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:16:03.373262 containerd[1804]: time="2024-06-25T14:16:03.373222698Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:16:03.373987 containerd[1804]: time="2024-06-25T14:16:03.373923388Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:16:03.378978 containerd[1804]: time="2024-06-25T14:16:03.378912170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.379557 containerd[1804]: time="2024-06-25T14:16:03.379490842Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jun 25 14:16:03.380043 containerd[1804]: time="2024-06-25T14:16:03.380003215Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:16:03.381103 containerd[1804]: time="2024-06-25T14:16:03.381045472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.382328 containerd[1804]: time="2024-06-25T14:16:03.382264968Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.382645 containerd[1804]: time="2024-06-25T14:16:03.382612097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383048 containerd[1804]: time="2024-06-25T14:16:03.383016757Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383196 containerd[1804]: time="2024-06-25T14:16:03.383167649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383336 containerd[1804]: time="2024-06-25T14:16:03.383307904Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383482 containerd[1804]: time="2024-06-25T14:16:03.383453685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383631 containerd[1804]: time="2024-06-25T14:16:03.383602969Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.383813 containerd[1804]: time="2024-06-25T14:16:03.383784174Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:16:03.385072 containerd[1804]: time="2024-06-25T14:16:03.385037558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.385236 containerd[1804]: time="2024-06-25T14:16:03.385207293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Jun 25 14:16:03.385374 containerd[1804]: time="2024-06-25T14:16:03.385346666Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.385524 containerd[1804]: time="2024-06-25T14:16:03.385495688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.385751 containerd[1804]: time="2024-06-25T14:16:03.385722716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.387734 containerd[1804]: time="2024-06-25T14:16:03.387628692Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.388130 containerd[1804]: time="2024-06-25T14:16:03.388086583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.388415 containerd[1804]: time="2024-06-25T14:16:03.388379695Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jun 25 14:16:03.389266 containerd[1804]: time="2024-06-25T14:16:03.389116441Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri 
StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:16:03.389988 containerd[1804]: time="2024-06-25T14:16:03.389944319Z" level=info msg="Connect containerd service" Jun 25 14:16:03.390202 containerd[1804]: time="2024-06-25T14:16:03.390152766Z" level=info msg="using legacy CRI server" Jun 25 14:16:03.390335 containerd[1804]: time="2024-06-25T14:16:03.390307159Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:16:03.390565 containerd[1804]: time="2024-06-25T14:16:03.390536283Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:16:03.392336 containerd[1804]: time="2024-06-25T14:16:03.392263257Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:16:03.398583 containerd[1804]: time="2024-06-25T14:16:03.398426356Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:16:03.399832 containerd[1804]: time="2024-06-25T14:16:03.399757722Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:16:03.402341 containerd[1804]: time="2024-06-25T14:16:03.398497073Z" level=info msg="Start subscribing containerd event" Jun 25 14:16:03.402974 containerd[1804]: time="2024-06-25T14:16:03.402245681Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:16:03.403383 containerd[1804]: time="2024-06-25T14:16:03.403346398Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:16:03.404446 containerd[1804]: time="2024-06-25T14:16:03.404407034Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:16:03.405460 containerd[1804]: time="2024-06-25T14:16:03.405404088Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:16:03.406246 containerd[1804]: time="2024-06-25T14:16:03.406201998Z" level=info msg="Start recovering state" Jun 25 14:16:03.408885 containerd[1804]: time="2024-06-25T14:16:03.408732468Z" level=info msg="Start event monitor" Jun 25 14:16:03.412736 containerd[1804]: time="2024-06-25T14:16:03.412703220Z" level=info msg="Start snapshots syncer" Jun 25 14:16:03.413030 containerd[1804]: time="2024-06-25T14:16:03.412982908Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:16:03.413293 containerd[1804]: time="2024-06-25T14:16:03.413264335Z" level=info msg="Start streaming server" Jun 25 14:16:03.414027 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:16:03.416975 containerd[1804]: time="2024-06-25T14:16:03.413908411Z" level=info msg="containerd successfully booted in 0.289013s" Jun 25 14:16:03.420007 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jun 25 14:16:03.520490 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [Registrar] Starting registrar module Jun 25 14:16:03.626100 amazon-ssm-agent[1822]: 2024-06-25 14:16:02 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 14:16:03.963049 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:04.269102 tar[1801]: linux-arm64/LICENSE Jun 25 14:16:04.269629 tar[1801]: linux-arm64/README.md Jun 25 14:16:04.296395 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:16:05.005028 kubelet[1985]: E0625 14:16:05.004957 1985 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:16:05.011285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:16:05.011724 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:16:05.012297 systemd[1]: kubelet.service: Consumed 1.420s CPU time. Jun 25 14:16:05.033692 amazon-ssm-agent[1822]: 2024-06-25 14:16:05 INFO [EC2Identity] EC2 registration was successful. Jun 25 14:16:05.066448 amazon-ssm-agent[1822]: 2024-06-25 14:16:05 INFO [CredentialRefresher] credentialRefresher has started Jun 25 14:16:05.066448 amazon-ssm-agent[1822]: 2024-06-25 14:16:05 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 14:16:05.066703 amazon-ssm-agent[1822]: 2024-06-25 14:16:05 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 14:16:05.133841 amazon-ssm-agent[1822]: 2024-06-25 14:16:05 INFO [CredentialRefresher] Next credential rotation will be in 31.416657176816667 minutes Jun 25 14:16:06.095018 amazon-ssm-agent[1822]: 2024-06-25 14:16:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 14:16:06.195868 amazon-ssm-agent[1822]: 2024-06-25 14:16:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:1994) started Jun 25 14:16:06.297291 amazon-ssm-agent[1822]: 2024-06-25 14:16:06 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 14:16:06.952352 sshd_keygen[1819]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:16:06.994488 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:16:07.005520 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:16:07.018894 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:16:07.019281 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:16:07.030592 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:16:07.050571 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:16:07.061861 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:16:07.067941 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 14:16:07.070610 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:16:07.072932 systemd[1]: Reached target multi-user.target - Multi-User System. 
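[Annotation] The kubelet failure recorded above (and repeated further down each time the unit is restarted) is what a node looks like before it has been bootstrapped into a cluster: /var/lib/kubelet/config.yaml is normally written by a bootstrap tool such as kubeadm, so until that happens kubelet.service exits and systemd keeps scheduling restarts. As a purely illustrative sketch (the path is the one named in the error; the field values are assumptions, not taken from this host), a minimal hand-written KubeletConfiguration could be laid down like this:

# Illustration only: this file is normally generated during cluster bootstrap
# (e.g. by kubeadm); the values below are assumed, cluster-specific settings.
from pathlib import Path

MINIMAL_KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd        # matches the SystemdCgroup:true runc option in the containerd config above
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10               # assumed cluster DNS service IP
"""

def write_placeholder_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
    """Create the missing config so kubelet.service stops crash-looping; a real
    bootstrap tool would overwrite this with cluster-specific settings."""
    p = Path(path)
    p.parent.mkdir(parents=True, exist_ok=True)
    p.write_text(MINIMAL_KUBELET_CONFIG)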
Jun 25 14:16:07.082469 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jun 25 14:16:07.100977 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:16:07.101404 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:16:07.104002 systemd[1]: Startup finished in 1.102s (kernel) + 7.815s (initrd) + 11.554s (userspace) = 20.472s. Jun 25 14:16:09.843297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:16:09.853339 systemd[1]: Started sshd@0-172.31.16.245:22-139.178.68.195:49540.service - OpenSSH per-connection server daemon (139.178.68.195:49540). Jun 25 14:16:10.062831 sshd[2019]: Accepted publickey for core from 139.178.68.195 port 49540 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:10.066079 sshd[2019]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:10.082090 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:16:10.094929 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:16:10.101964 systemd-logind[1794]: New session 1 of user core. Jun 25 14:16:10.123292 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:16:10.131604 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:16:10.137299 (systemd)[2022]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:10.325806 systemd[2022]: Queued start job for default target default.target. Jun 25 14:16:10.334698 systemd[2022]: Reached target paths.target - Paths. Jun 25 14:16:10.334756 systemd[2022]: Reached target sockets.target - Sockets. Jun 25 14:16:10.334788 systemd[2022]: Reached target timers.target - Timers. Jun 25 14:16:10.334817 systemd[2022]: Reached target basic.target - Basic System. Jun 25 14:16:10.335030 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:16:10.338312 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:16:10.339360 systemd[2022]: Reached target default.target - Main User Target. Jun 25 14:16:10.339826 systemd[2022]: Startup finished in 188ms. Jun 25 14:16:10.500440 systemd[1]: Started sshd@1-172.31.16.245:22-139.178.68.195:49544.service - OpenSSH per-connection server daemon (139.178.68.195:49544). Jun 25 14:16:10.666821 sshd[2031]: Accepted publickey for core from 139.178.68.195 port 49544 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:10.670279 sshd[2031]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:10.680759 systemd-logind[1794]: New session 2 of user core. Jun 25 14:16:10.687023 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:16:10.819906 sshd[2031]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:10.826047 systemd[1]: sshd@1-172.31.16.245:22-139.178.68.195:49544.service: Deactivated successfully. Jun 25 14:16:10.827362 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 14:16:10.828866 systemd-logind[1794]: Session 2 logged out. Waiting for processes to exit. Jun 25 14:16:10.830340 systemd-logind[1794]: Removed session 2. Jun 25 14:16:10.864375 systemd[1]: Started sshd@2-172.31.16.245:22-139.178.68.195:49552.service - OpenSSH per-connection server daemon (139.178.68.195:49552). 
Jun 25 14:16:11.037198 sshd[2037]: Accepted publickey for core from 139.178.68.195 port 49552 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:11.040265 sshd[2037]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:11.047790 systemd-logind[1794]: New session 3 of user core. Jun 25 14:16:11.058961 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:16:11.184086 sshd[2037]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:11.189713 systemd-logind[1794]: Session 3 logged out. Waiting for processes to exit. Jun 25 14:16:11.190131 systemd[1]: sshd@2-172.31.16.245:22-139.178.68.195:49552.service: Deactivated successfully. Jun 25 14:16:11.191327 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 14:16:11.192704 systemd-logind[1794]: Removed session 3. Jun 25 14:16:11.229404 systemd[1]: Started sshd@3-172.31.16.245:22-139.178.68.195:49568.service - OpenSSH per-connection server daemon (139.178.68.195:49568). Jun 25 14:16:11.395584 sshd[2043]: Accepted publickey for core from 139.178.68.195 port 49568 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:11.399153 sshd[2043]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:11.409009 systemd-logind[1794]: New session 4 of user core. Jun 25 14:16:11.416062 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:16:11.551454 sshd[2043]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:11.557250 systemd-logind[1794]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:16:11.557743 systemd[1]: sshd@3-172.31.16.245:22-139.178.68.195:49568.service: Deactivated successfully. Jun 25 14:16:11.559264 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:16:11.561037 systemd-logind[1794]: Removed session 4. Jun 25 14:16:11.591751 systemd[1]: Started sshd@4-172.31.16.245:22-139.178.68.195:49574.service - OpenSSH per-connection server daemon (139.178.68.195:49574). Jun 25 14:16:11.765066 sshd[2049]: Accepted publickey for core from 139.178.68.195 port 49574 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:11.768405 sshd[2049]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:11.778393 systemd-logind[1794]: New session 5 of user core. Jun 25 14:16:11.785008 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:16:11.942884 sudo[2052]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:16:11.945010 sudo[2052]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:11.962253 sudo[2052]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:11.987902 sshd[2049]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:11.994973 systemd[1]: sshd@4-172.31.16.245:22-139.178.68.195:49574.service: Deactivated successfully. Jun 25 14:16:11.996598 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:16:11.998206 systemd-logind[1794]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:16:12.001589 systemd-logind[1794]: Removed session 5. Jun 25 14:16:12.032861 systemd[1]: Started sshd@5-172.31.16.245:22-139.178.68.195:49588.service - OpenSSH per-connection server daemon (139.178.68.195:49588). 
Jun 25 14:16:12.210063 sshd[2056]: Accepted publickey for core from 139.178.68.195 port 49588 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:12.213787 sshd[2056]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:12.223496 systemd-logind[1794]: New session 6 of user core. Jun 25 14:16:12.230088 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:16:12.348292 sudo[2061]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:16:12.349774 sudo[2061]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:12.357741 sudo[2061]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:12.370786 sudo[2060]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:16:12.371493 sudo[2060]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:12.398356 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:12.399000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:16:12.402509 kernel: kauditd_printk_skb: 108 callbacks suppressed Jun 25 14:16:12.402619 kernel: audit: type=1305 audit(1719324972.399:200): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:16:12.399000 audit[2064]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffddcfcfc0 a2=420 a3=0 items=0 ppid=1 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.405630 auditctl[2064]: No rules Jun 25 14:16:12.410185 kernel: audit: type=1300 audit(1719324972.399:200): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffddcfcfc0 a2=420 a3=0 items=0 ppid=1 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.410885 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:16:12.411335 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:16:12.399000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:16:12.413359 kernel: audit: type=1327 audit(1719324972.399:200): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:16:12.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.417714 kernel: audit: type=1131 audit(1719324972.409:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.424170 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:12.480780 augenrules[2081]: No rules Jun 25 14:16:12.483018 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. 
Jun 25 14:16:12.481000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.486077 sudo[2060]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:12.484000 audit[2060]: USER_END pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.493471 kernel: audit: type=1130 audit(1719324972.481:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.493617 kernel: audit: type=1106 audit(1719324972.484:203): pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.484000 audit[2060]: CRED_DISP pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.497688 kernel: audit: type=1104 audit(1719324972.484:204): pid=2060 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.513517 sshd[2056]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:12.515000 audit[2056]: USER_END pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.518695 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:16:12.520042 systemd[1]: sshd@5-172.31.16.245:22-139.178.68.195:49588.service: Deactivated successfully. Jun 25 14:16:12.515000 audit[2056]: CRED_DISP pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.526521 kernel: audit: type=1106 audit(1719324972.515:205): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.526696 kernel: audit: type=1104 audit(1719324972.515:206): pid=2056 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.526756 kernel: audit: type=1131 audit(1719324972.519:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.245:22-139.178.68.195:49588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:12.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.245:22-139.178.68.195:49588 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.531165 systemd-logind[1794]: Session 6 logged out. Waiting for processes to exit. Jun 25 14:16:12.533409 systemd-logind[1794]: Removed session 6. Jun 25 14:16:12.557847 systemd[1]: Started sshd@6-172.31.16.245:22-139.178.68.195:49600.service - OpenSSH per-connection server daemon (139.178.68.195:49600). Jun 25 14:16:12.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.245:22-139.178.68.195:49600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.734000 audit[2087]: USER_ACCT pid=2087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.735273 sshd[2087]: Accepted publickey for core from 139.178.68.195 port 49600 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:12.737000 audit[2087]: CRED_ACQ pid=2087 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.737000 audit[2087]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff7c7f090 a2=3 a3=1 items=0 ppid=1 pid=2087 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:12.737000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:12.739401 sshd[2087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:12.748433 systemd-logind[1794]: New session 7 of user core. Jun 25 14:16:12.755053 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:16:12.766000 audit[2087]: USER_START pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.769000 audit[2089]: CRED_ACQ pid=2089 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:12.868000 audit[2090]: USER_ACCT pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:12.870235 sudo[2090]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:16:12.868000 audit[2090]: CRED_REFR pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:12.870978 sudo[2090]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:12.873000 audit[2090]: USER_START pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:13.105563 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:16:13.581481 dockerd[2100]: time="2024-06-25T14:16:13.581281260Z" level=info msg="Starting up" Jun 25 14:16:13.631236 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2929217118-merged.mount: Deactivated successfully. Jun 25 14:16:13.664565 systemd[1]: var-lib-docker-metacopy\x2dcheck318685979-merged.mount: Deactivated successfully. Jun 25 14:16:13.683406 dockerd[2100]: time="2024-06-25T14:16:13.683321245Z" level=info msg="Loading containers: start." Jun 25 14:16:13.811000 audit[2132]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2132 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.811000 audit[2132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffee094aa0 a2=0 a3=1 items=0 ppid=2100 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.811000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:16:13.817000 audit[2134]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.817000 audit[2134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc2434250 a2=0 a3=1 items=0 ppid=2100 pid=2134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.817000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:16:13.823000 audit[2136]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2136 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.823000 audit[2136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcd90a9b0 a2=0 a3=1 items=0 ppid=2100 pid=2136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.823000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:16:13.828000 audit[2138]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.828000 audit[2138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff09b7280 a2=0 a3=1 items=0 ppid=2100 pid=2138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.828000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:16:13.836000 audit[2140]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.836000 audit[2140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffe60f4a0 a2=0 a3=1 items=0 ppid=2100 pid=2140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.836000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:16:13.842000 audit[2142]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2142 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.842000 audit[2142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcc778e40 a2=0 a3=1 items=0 ppid=2100 pid=2142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.842000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:16:13.861000 audit[2144]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.861000 audit[2144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9b86ea0 a2=0 a3=1 items=0 ppid=2100 pid=2144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.861000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:16:13.867000 audit[2146]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.867000 audit[2146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd1eb31a0 a2=0 a3=1 items=0 ppid=2100 pid=2146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.867000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:16:13.872000 audit[2148]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2148 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.872000 audit[2148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffff8e2f490 a2=0 a3=1 items=0 ppid=2100 pid=2148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.872000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:16:13.895000 audit[2152]: NETFILTER_CFG table=filter:11 
family=2 entries=1 op=nft_unregister_rule pid=2152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.895000 audit[2152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe16e8a20 a2=0 a3=1 items=0 ppid=2100 pid=2152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.895000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:16:13.898000 audit[2153]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.898000 audit[2153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd72984b0 a2=0 a3=1 items=0 ppid=2100 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.898000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:16:13.922707 kernel: Initializing XFRM netlink socket Jun 25 14:16:13.972530 (udev-worker)[2112]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:16:13.996000 audit[2161]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:13.996000 audit[2161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffc8e33770 a2=0 a3=1 items=0 ppid=2100 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:13.996000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:16:14.017000 audit[2164]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.017000 audit[2164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc32d7fd0 a2=0 a3=1 items=0 ppid=2100 pid=2164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.017000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:16:14.028000 audit[2168]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2168 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.028000 audit[2168]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd5bb6b60 a2=0 a3=1 items=0 ppid=2100 pid=2168 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.028000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 
25 14:16:14.035000 audit[2170]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.035000 audit[2170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd3a53c40 a2=0 a3=1 items=0 ppid=2100 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.035000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:16:14.041000 audit[2172]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2172 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.041000 audit[2172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffeda4b090 a2=0 a3=1 items=0 ppid=2100 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.041000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:16:14.049000 audit[2174]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.049000 audit[2174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffc7a7d2f0 a2=0 a3=1 items=0 ppid=2100 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.049000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:16:14.053000 audit[2176]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.053000 audit[2176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=fffff64aa5a0 a2=0 a3=1 items=0 ppid=2100 pid=2176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.053000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:16:14.068000 audit[2179]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.068000 audit[2179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff48ecd90 a2=0 a3=1 items=0 ppid=2100 pid=2179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.068000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:16:14.073000 audit[2181]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2181 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.073000 audit[2181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffcf852fc0 a2=0 a3=1 items=0 ppid=2100 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.073000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:16:14.079000 audit[2183]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.079000 audit[2183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff5659930 a2=0 a3=1 items=0 ppid=2100 pid=2183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.079000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:16:14.084000 audit[2185]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.084000 audit[2185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc3357d30 a2=0 a3=1 items=0 ppid=2100 pid=2185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.084000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:16:14.086273 systemd-networkd[1524]: docker0: Link UP Jun 25 14:16:14.104000 audit[2189]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2189 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.104000 audit[2189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffca567040 a2=0 a3=1 items=0 ppid=2100 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:16:14.109000 audit[2190]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2190 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:14.109000 audit[2190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc915c4a0 a2=0 a3=1 items=0 ppid=2100 pid=2190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:14.109000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:16:14.111495 dockerd[2100]: time="2024-06-25T14:16:14.111443250Z" level=info msg="Loading containers: done." Jun 25 14:16:14.298475 dockerd[2100]: time="2024-06-25T14:16:14.298403450Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:16:14.299167 dockerd[2100]: time="2024-06-25T14:16:14.299118379Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:16:14.299593 dockerd[2100]: time="2024-06-25T14:16:14.299557561Z" level=info msg="Daemon has completed initialization" Jun 25 14:16:14.354303 dockerd[2100]: time="2024-06-25T14:16:14.354143284Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:16:14.354432 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 14:16:14.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:14.620766 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2925450423-merged.mount: Deactivated successfully. Jun 25 14:16:15.263205 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:16:15.263559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:15.263680 systemd[1]: kubelet.service: Consumed 1.420s CPU time. Jun 25 14:16:15.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:15.262000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:15.272248 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:15.860355 containerd[1804]: time="2024-06-25T14:16:15.860256372Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\"" Jun 25 14:16:16.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:16.173571 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:16.275111 kubelet[2236]: E0625 14:16:16.275054 2236 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:16:16.282633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:16:16.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:16:16.282992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:16:16.595387 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4057066010.mount: Deactivated successfully. Jun 25 14:16:18.323764 containerd[1804]: time="2024-06-25T14:16:18.323702555Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:18.326561 containerd[1804]: time="2024-06-25T14:16:18.326496399Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.2: active requests=0, bytes read=29940430" Jun 25 14:16:18.326990 containerd[1804]: time="2024-06-25T14:16:18.326943720Z" level=info msg="ImageCreate event name:\"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:18.331620 containerd[1804]: time="2024-06-25T14:16:18.331566002Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:18.335436 containerd[1804]: time="2024-06-25T14:16:18.335366829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:18.338314 containerd[1804]: time="2024-06-25T14:16:18.338227351Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.2\" with image id \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d\", size \"29937230\" in 2.477871554s" Jun 25 14:16:18.338314 containerd[1804]: time="2024-06-25T14:16:18.338305788Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:84c601f3f72c87776cdcf77a73329d1f45297e43a92508b0f289fa2fcf8872a0\"" Jun 25 14:16:18.383025 containerd[1804]: time="2024-06-25T14:16:18.382948501Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\"" Jun 25 14:16:20.355586 containerd[1804]: time="2024-06-25T14:16:20.355528466Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:20.358354 containerd[1804]: time="2024-06-25T14:16:20.358268214Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.2: active requests=0, bytes read=26881371" Jun 25 14:16:20.359908 containerd[1804]: time="2024-06-25T14:16:20.359850128Z" level=info msg="ImageCreate event name:\"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:20.364003 containerd[1804]: time="2024-06-25T14:16:20.363948966Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:20.367958 containerd[1804]: time="2024-06-25T14:16:20.367900472Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:20.372376 containerd[1804]: time="2024-06-25T14:16:20.372289847Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.2\" with image id \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e\", size \"28368865\" in 1.98926203s" Jun 25 14:16:20.372599 containerd[1804]: time="2024-06-25T14:16:20.372563777Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e1dcc3400d3ea6a268c7ea6e66c3a196703770a8e346b695f54344ab53a47567\"" Jun 25 14:16:20.417293 containerd[1804]: time="2024-06-25T14:16:20.417210804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\"" Jun 25 14:16:22.192055 containerd[1804]: time="2024-06-25T14:16:22.191975672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:22.194113 containerd[1804]: time="2024-06-25T14:16:22.194044317Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.2: active requests=0, bytes read=16155688" Jun 25 14:16:22.195745 containerd[1804]: time="2024-06-25T14:16:22.195700943Z" level=info msg="ImageCreate event name:\"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:22.202801 containerd[1804]: time="2024-06-25T14:16:22.202739406Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:22.206723 containerd[1804]: time="2024-06-25T14:16:22.206619267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:22.208668 containerd[1804]: time="2024-06-25T14:16:22.208582458Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.2\" with image id \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc\", size \"17643200\" in 1.791281299s" Jun 25 14:16:22.208668 containerd[1804]: time="2024-06-25T14:16:22.208646932Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:c7dd04b1bafeb51c650fde7f34ac0fdafa96030e77ea7a822135ff302d895dd5\"" Jun 25 14:16:22.246313 containerd[1804]: time="2024-06-25T14:16:22.246260128Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\"" Jun 25 14:16:23.609033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3765715073.mount: Deactivated successfully. 
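[Annotation] The mount unit names in these lines (var-lib-containerd-tmpmounts-containerd\x2dmount....mount, and the earlier var-lib-docker-check\x2doverlayfs\x2dsupport... units) look odd because systemd escapes filesystem paths into unit names: '/' becomes '-' and characters such as a literal '-' become \xNN hex escapes. A rough sketch of that escaping (an approximation of systemd-escape --path --suffix=mount, not the exact implementation):

def systemd_escape_path(path: str, suffix: str = "mount") -> str:
    """Approximate systemd path escaping: strip slashes at the ends, turn '/'
    into '-', keep alphanumerics plus ':_.' and hex-escape everything else."""
    trimmed = path.strip("/")
    out = []
    for ch in trimmed:
        if ch == "/":
            out.append("-")
        elif ch.isalnum() or ch in ":_.":
            out.append(ch)
        else:
            out.append("\\x%02x" % ord(ch))
    return "".join(out) + "." + suffix

# e.g. the temporary mount torn down in the line above:
print(systemd_escape_path("/var/lib/containerd/tmpmounts/containerd-mount3765715073"))
# -> var-lib-containerd-tmpmounts-containerd\x2dmount3765715073.mount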
Jun 25 14:16:24.248009 containerd[1804]: time="2024-06-25T14:16:24.247948521Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:24.250551 containerd[1804]: time="2024-06-25T14:16:24.250497150Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.2: active requests=0, bytes read=25634092" Jun 25 14:16:24.251864 containerd[1804]: time="2024-06-25T14:16:24.251802046Z" level=info msg="ImageCreate event name:\"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:24.255327 containerd[1804]: time="2024-06-25T14:16:24.255268096Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:24.259626 containerd[1804]: time="2024-06-25T14:16:24.259571588Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:24.262326 containerd[1804]: time="2024-06-25T14:16:24.262247505Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\", repo tag \"registry.k8s.io/kube-proxy:v1.30.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec\", size \"25633111\" in 2.015735234s" Jun 25 14:16:24.262326 containerd[1804]: time="2024-06-25T14:16:24.262320132Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:66dbb96a9149f69913ff817f696be766014cacdffc2ce0889a76c81165415fae\"" Jun 25 14:16:24.302121 containerd[1804]: time="2024-06-25T14:16:24.302067415Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jun 25 14:16:24.935172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3740096578.mount: Deactivated successfully. 
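[Annotation] The "Pulled image ... in <duration>" messages above and below give per-image timings for the control-plane image downloads. A small sketch for summarising them when reading a captured journal like this one (the regular expression is inferred from the message format shown here, not from any documented containerd schema):

import re

# Pattern inferred from the journal lines in this log; adjust if the format differs.
PULL = re.compile(r'Pulled image \\"(?P<image>[^\\]+)\\".* in (?P<duration>[0-9.]+(?:ms|s))')

def pull_timings(lines):
    """Yield (image, duration) pairs from journal lines reporting completed pulls."""
    for line in lines:
        m = PULL.search(line)
        if m:
            yield m.group("image"), m.group("duration")

# Example with a line shaped like the ones above (truncated fields are placeholders):
sample = r'containerd[1804]: level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.2\" with image id \"sha256:66db...\", size \"25633111\" in 2.015735234s"'
print(list(pull_timings([sample])))   # [('registry.k8s.io/kube-proxy:v1.30.2', '2.015735234s')]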
Jun 25 14:16:26.263226 containerd[1804]: time="2024-06-25T14:16:26.263138688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.265705 containerd[1804]: time="2024-06-25T14:16:26.265598792Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485381" Jun 25 14:16:26.266411 containerd[1804]: time="2024-06-25T14:16:26.266321952Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.270395 containerd[1804]: time="2024-06-25T14:16:26.270345048Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.274457 containerd[1804]: time="2024-06-25T14:16:26.274390468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.279126 containerd[1804]: time="2024-06-25T14:16:26.279045879Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.976744423s" Jun 25 14:16:26.279287 containerd[1804]: time="2024-06-25T14:16:26.279119210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Jun 25 14:16:26.319929 containerd[1804]: time="2024-06-25T14:16:26.319879102Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:16:26.537828 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 14:16:26.537965 kernel: audit: type=1130 audit(1719324986.534:246): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.534314 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:16:26.548994 kernel: audit: type=1131 audit(1719324986.534:247): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.534703 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:26.548271 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:26.950321 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4060966816.mount: Deactivated successfully. 
Jun 25 14:16:26.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.960365 containerd[1804]: time="2024-06-25T14:16:26.958622462Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.959286 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:26.963865 kernel: audit: type=1130 audit(1719324986.959:248): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:26.965433 containerd[1804]: time="2024-06-25T14:16:26.965349538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 14:16:26.967719 containerd[1804]: time="2024-06-25T14:16:26.967599978Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.973391 containerd[1804]: time="2024-06-25T14:16:26.973311459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.979240 containerd[1804]: time="2024-06-25T14:16:26.979166470Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:26.983262 containerd[1804]: time="2024-06-25T14:16:26.983165778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 663.009966ms" Jun 25 14:16:26.983262 containerd[1804]: time="2024-06-25T14:16:26.983256551Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:16:27.036866 containerd[1804]: time="2024-06-25T14:16:27.036806133Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jun 25 14:16:27.065810 kubelet[2387]: E0625 14:16:27.065723 2387 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:16:27.069368 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:16:27.069000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:16:27.069736 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:16:27.074718 kernel: audit: type=1131 audit(1719324987.069:249): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:16:27.661211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661511529.mount: Deactivated successfully. Jun 25 14:16:31.190057 containerd[1804]: time="2024-06-25T14:16:31.189997967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:31.192959 containerd[1804]: time="2024-06-25T14:16:31.192890490Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=66191472" Jun 25 14:16:31.194759 containerd[1804]: time="2024-06-25T14:16:31.194713258Z" level=info msg="ImageCreate event name:\"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:31.198166 containerd[1804]: time="2024-06-25T14:16:31.198118067Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:31.202418 containerd[1804]: time="2024-06-25T14:16:31.202365919Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:16:31.205115 containerd[1804]: time="2024-06-25T14:16:31.205043404Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"66189079\" in 4.168152743s" Jun 25 14:16:31.205254 containerd[1804]: time="2024-06-25T14:16:31.205112260Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Jun 25 14:16:32.889947 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 14:16:32.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:32.894722 kernel: audit: type=1131 audit(1719324992.888:250): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:32.909000 audit: BPF prog-id=46 op=UNLOAD Jun 25 14:16:32.909000 audit: BPF prog-id=45 op=UNLOAD Jun 25 14:16:32.913306 kernel: audit: type=1334 audit(1719324992.909:251): prog-id=46 op=UNLOAD Jun 25 14:16:32.913450 kernel: audit: type=1334 audit(1719324992.909:252): prog-id=45 op=UNLOAD Jun 25 14:16:32.913514 kernel: audit: type=1334 audit(1719324992.909:253): prog-id=44 op=UNLOAD Jun 25 14:16:32.909000 audit: BPF prog-id=44 op=UNLOAD Jun 25 14:16:37.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.321083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jun 25 14:16:37.332004 kernel: audit: type=1130 audit(1719324997.319:254): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jun 25 14:16:37.332061 kernel: audit: type=1131 audit(1719324997.319:255): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.321430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:37.331254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:37.689976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:37.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.695707 kernel: audit: type=1130 audit(1719324997.688:256): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.781399 kubelet[2504]: E0625 14:16:37.781329 2504 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:16:37.784416 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:16:37.784794 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:16:37.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:16:37.789711 kernel: audit: type=1131 audit(1719324997.783:257): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:16:38.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.284737 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:38.283000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.292028 kernel: audit: type=1130 audit(1719324998.283:258): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.292140 kernel: audit: type=1131 audit(1719324998.283:259): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.303139 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 14:16:38.341870 systemd[1]: Reloading. Jun 25 14:16:38.781068 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:16:38.946000 audit: BPF prog-id=47 op=LOAD Jun 25 14:16:38.948000 audit: BPF prog-id=30 op=UNLOAD Jun 25 14:16:38.950722 kernel: audit: type=1334 audit(1719324998.946:260): prog-id=47 op=LOAD Jun 25 14:16:38.950816 kernel: audit: type=1334 audit(1719324998.948:261): prog-id=30 op=UNLOAD Jun 25 14:16:38.949000 audit: BPF prog-id=48 op=LOAD Jun 25 14:16:38.952261 kernel: audit: type=1334 audit(1719324998.949:262): prog-id=48 op=LOAD Jun 25 14:16:38.951000 audit: BPF prog-id=49 op=LOAD Jun 25 14:16:38.953587 kernel: audit: type=1334 audit(1719324998.951:263): prog-id=49 op=LOAD Jun 25 14:16:38.957197 kernel: audit: type=1334 audit(1719324998.952:264): prog-id=31 op=UNLOAD Jun 25 14:16:38.957310 kernel: audit: type=1334 audit(1719324998.952:265): prog-id=32 op=UNLOAD Jun 25 14:16:38.952000 audit: BPF prog-id=31 op=UNLOAD Jun 25 14:16:38.952000 audit: BPF prog-id=32 op=UNLOAD Jun 25 14:16:38.958710 kernel: audit: type=1334 audit(1719324998.952:266): prog-id=50 op=LOAD Jun 25 14:16:38.952000 audit: BPF prog-id=50 op=LOAD Jun 25 14:16:38.960080 kernel: audit: type=1334 audit(1719324998.952:267): prog-id=51 op=LOAD Jun 25 14:16:38.952000 audit: BPF prog-id=51 op=LOAD Jun 25 14:16:38.952000 audit: BPF prog-id=33 op=UNLOAD Jun 25 14:16:38.952000 audit: BPF prog-id=34 op=UNLOAD Jun 25 14:16:38.954000 audit: BPF prog-id=52 op=LOAD Jun 25 14:16:38.954000 audit: BPF prog-id=35 op=UNLOAD Jun 25 14:16:38.958000 audit: BPF prog-id=53 op=LOAD Jun 25 14:16:38.958000 audit: BPF prog-id=36 op=UNLOAD Jun 25 14:16:38.958000 audit: BPF prog-id=54 op=LOAD Jun 25 14:16:38.958000 audit: BPF prog-id=55 op=LOAD Jun 25 14:16:38.958000 audit: BPF prog-id=37 op=UNLOAD Jun 25 14:16:38.958000 audit: BPF prog-id=38 op=UNLOAD Jun 25 14:16:38.959000 audit: BPF prog-id=56 op=LOAD Jun 25 14:16:38.959000 audit: BPF prog-id=39 op=UNLOAD Jun 25 14:16:38.963000 audit: BPF prog-id=57 op=LOAD Jun 25 14:16:38.963000 audit: BPF prog-id=40 op=UNLOAD Jun 25 14:16:38.970000 audit: BPF prog-id=58 op=LOAD Jun 25 14:16:38.970000 audit: BPF prog-id=41 op=UNLOAD Jun 25 14:16:38.970000 audit: BPF prog-id=59 op=LOAD Jun 25 14:16:38.970000 audit: BPF prog-id=60 op=LOAD Jun 25 14:16:38.970000 audit: BPF prog-id=42 op=UNLOAD Jun 25 14:16:38.971000 audit: BPF prog-id=43 op=UNLOAD Jun 25 14:16:39.019008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:39.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:39.027847 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:39.029540 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:16:39.029994 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:39.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:39.036494 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 14:16:39.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:39.386370 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:39.480377 kubelet[2595]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:16:39.480976 kubelet[2595]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:16:39.481086 kubelet[2595]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:16:39.481313 kubelet[2595]: I0625 14:16:39.481258 2595 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:16:40.472057 kubelet[2595]: I0625 14:16:40.472011 2595 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 14:16:40.472267 kubelet[2595]: I0625 14:16:40.472243 2595 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:16:40.472839 kubelet[2595]: I0625 14:16:40.472801 2595 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 14:16:40.502863 kubelet[2595]: E0625 14:16:40.502820 2595 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.245:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.503772 kubelet[2595]: I0625 14:16:40.503728 2595 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:16:40.523310 kubelet[2595]: I0625 14:16:40.523234 2595 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:16:40.526077 kubelet[2595]: I0625 14:16:40.525954 2595 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:16:40.526701 kubelet[2595]: I0625 14:16:40.526070 2595 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-245","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:16:40.527029 kubelet[2595]: I0625 14:16:40.526797 2595 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:16:40.527029 kubelet[2595]: I0625 14:16:40.526834 2595 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:16:40.527234 kubelet[2595]: I0625 14:16:40.527161 2595 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:16:40.529332 kubelet[2595]: I0625 14:16:40.529276 2595 kubelet.go:400] "Attempting to sync node with API server" Jun 25 14:16:40.529332 kubelet[2595]: I0625 14:16:40.529331 2595 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:16:40.529592 kubelet[2595]: I0625 14:16:40.529460 2595 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:16:40.529592 kubelet[2595]: I0625 14:16:40.529538 2595 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:16:40.530959 kubelet[2595]: I0625 14:16:40.530908 2595 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:16:40.531768 kubelet[2595]: I0625 14:16:40.531718 2595 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:16:40.532136 kubelet[2595]: W0625 14:16:40.532101 2595 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 25 14:16:40.533617 kubelet[2595]: I0625 14:16:40.533560 2595 server.go:1264] "Started kubelet" Jun 25 14:16:40.538561 kubelet[2595]: I0625 14:16:40.538518 2595 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:16:40.544000 audit[2605]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2605 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.544000 audit[2605]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff4e3b610 a2=0 a3=1 items=0 ppid=2595 pid=2605 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.544000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:16:40.547332 kubelet[2595]: W0625 14:16:40.547219 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-245&limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.547332 kubelet[2595]: E0625 14:16:40.547338 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-245&limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.548844 kubelet[2595]: I0625 14:16:40.548766 2595 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:16:40.552424 kubelet[2595]: I0625 14:16:40.550643 2595 server.go:455] "Adding debug handlers to kubelet server" Jun 25 14:16:40.552424 kubelet[2595]: I0625 14:16:40.552240 2595 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:16:40.552643 kubelet[2595]: I0625 14:16:40.552610 2595 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:16:40.551000 audit[2606]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.551000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb28b4e0 a2=0 a3=1 items=0 ppid=2595 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.551000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:16:40.553608 kubelet[2595]: W0625 14:16:40.548565 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.553608 kubelet[2595]: E0625 14:16:40.553361 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.558346 kubelet[2595]: I0625 14:16:40.558294 2595 volume_manager.go:291] "Starting Kubelet 
Volume Manager" Jun 25 14:16:40.560740 kubelet[2595]: I0625 14:16:40.560654 2595 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 14:16:40.560935 kubelet[2595]: I0625 14:16:40.560903 2595 reconciler.go:26] "Reconciler: start to sync state" Jun 25 14:16:40.562526 kubelet[2595]: W0625 14:16:40.562454 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.562992 kubelet[2595]: E0625 14:16:40.562925 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.562992 kubelet[2595]: E0625 14:16:40.562822 2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": dial tcp 172.31.16.245:6443: connect: connection refused" interval="200ms" Jun 25 14:16:40.563566 kubelet[2595]: I0625 14:16:40.563510 2595 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:16:40.563858 kubelet[2595]: I0625 14:16:40.563804 2595 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 14:16:40.564785 kubelet[2595]: E0625 14:16:40.560600 2595 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.16.245:6443/api/v1/namespaces/default/events\": dial tcp 172.31.16.245:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-16-245.17dc44fa863fab57 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-16-245,UID:ip-172-31-16-245,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-16-245,},FirstTimestamp:2024-06-25 14:16:40.533519191 +0000 UTC m=+1.136134052,LastTimestamp:2024-06-25 14:16:40.533519191 +0000 UTC m=+1.136134052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-16-245,}" Jun 25 14:16:40.566024 kubelet[2595]: E0625 14:16:40.565978 2595 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:16:40.568268 kubelet[2595]: I0625 14:16:40.568224 2595 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:16:40.583000 audit[2611]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2611 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.583000 audit[2611]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffca29fbe0 a2=0 a3=1 items=0 ppid=2595 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:16:40.597425 kubelet[2595]: I0625 14:16:40.597391 2595 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:16:40.597805 kubelet[2595]: I0625 14:16:40.597780 2595 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:16:40.597969 kubelet[2595]: I0625 14:16:40.597948 2595 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:16:40.598000 audit[2613]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2613 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.598000 audit[2613]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffce5591a0 a2=0 a3=1 items=0 ppid=2595 pid=2613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:16:40.613000 audit[2616]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2616 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.613000 audit[2616]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffec4d5150 a2=0 a3=1 items=0 ppid=2595 pid=2616 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:16:40.618049 kubelet[2595]: I0625 14:16:40.615698 2595 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jun 25 14:16:40.619972 kubelet[2595]: I0625 14:16:40.619916 2595 policy_none.go:49] "None policy: Start" Jun 25 14:16:40.621497 kubelet[2595]: I0625 14:16:40.621443 2595 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:16:40.621727 kubelet[2595]: I0625 14:16:40.621512 2595 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:16:40.620000 audit[2620]: NETFILTER_CFG table=mangle:31 family=2 entries=1 op=nft_register_chain pid=2620 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.620000 audit[2620]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd3ba5860 a2=0 a3=1 items=0 ppid=2595 pid=2620 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.620000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:16:40.621000 audit[2618]: NETFILTER_CFG table=mangle:32 family=10 entries=2 op=nft_register_chain pid=2618 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:16:40.621000 audit[2618]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffec804e50 a2=0 a3=1 items=0 ppid=2595 pid=2618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.621000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:16:40.623474 kubelet[2595]: I0625 14:16:40.623426 2595 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:16:40.623766 kubelet[2595]: I0625 14:16:40.623734 2595 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:16:40.623984 kubelet[2595]: I0625 14:16:40.623953 2595 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 14:16:40.625105 kubelet[2595]: E0625 14:16:40.625033 2595 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:16:40.626055 kubelet[2595]: W0625 14:16:40.625956 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.626055 kubelet[2595]: E0625 14:16:40.626056 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:40.626000 audit[2621]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2621 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:16:40.626000 audit[2621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe551a870 a2=0 a3=1 items=0 ppid=2595 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.626000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:16:40.629000 audit[2624]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=2624 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:16:40.629000 audit[2624]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffe58ad3b0 a2=0 a3=1 items=0 ppid=2595 pid=2624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.629000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:16:40.630000 audit[2622]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2622 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.630000 audit[2622]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf5c7530 a2=0 a3=1 items=0 ppid=2595 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.630000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:16:40.633000 audit[2626]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=2626 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:16:40.633000 audit[2626]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffde081fd0 a2=0 a3=1 items=0 ppid=2595 pid=2626 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.633000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:16:40.633000 audit[2625]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2625 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:16:40.633000 audit[2625]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4acdb20 a2=0 a3=1 items=0 ppid=2595 pid=2625 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:40.633000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:16:40.644334 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 25 14:16:40.661694 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 25 14:16:40.665632 kubelet[2595]: I0625 14:16:40.665557 2595 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:40.666475 kubelet[2595]: E0625 14:16:40.666417 2595 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.245:6443/api/v1/nodes\": dial tcp 172.31.16.245:6443: connect: connection refused" node="ip-172-31-16-245" Jun 25 14:16:40.670926 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jun 25 14:16:40.681617 kubelet[2595]: I0625 14:16:40.681546 2595 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:16:40.682024 kubelet[2595]: I0625 14:16:40.681952 2595 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 14:16:40.682199 kubelet[2595]: I0625 14:16:40.682148 2595 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:16:40.687609 kubelet[2595]: E0625 14:16:40.687393 2595 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-245\" not found" Jun 25 14:16:40.728254 kubelet[2595]: I0625 14:16:40.725879 2595 topology_manager.go:215] "Topology Admit Handler" podUID="649c8310f37d09ac34a3aeb81b0b8f5b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-245" Jun 25 14:16:40.729492 kubelet[2595]: I0625 14:16:40.729431 2595 topology_manager.go:215] "Topology Admit Handler" podUID="b5aad3a82c91e0d331534ad830377090" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.732422 kubelet[2595]: I0625 14:16:40.732361 2595 topology_manager.go:215] "Topology Admit Handler" podUID="747038e66efc595a67c79c0bd4d9db23" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-245" Jun 25 14:16:40.748639 systemd[1]: Created slice kubepods-burstable-pod649c8310f37d09ac34a3aeb81b0b8f5b.slice - libcontainer container kubepods-burstable-pod649c8310f37d09ac34a3aeb81b0b8f5b.slice. Jun 25 14:16:40.761626 systemd[1]: Created slice kubepods-burstable-pod747038e66efc595a67c79c0bd4d9db23.slice - libcontainer container kubepods-burstable-pod747038e66efc595a67c79c0bd4d9db23.slice. 
Jun 25 14:16:40.762707 kubelet[2595]: I0625 14:16:40.762466 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.762881 kubelet[2595]: I0625 14:16:40.762810 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/747038e66efc595a67c79c0bd4d9db23-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-245\" (UID: \"747038e66efc595a67c79c0bd4d9db23\") " pod="kube-system/kube-scheduler-ip-172-31-16-245" Jun 25 14:16:40.762956 kubelet[2595]: I0625 14:16:40.762901 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:40.763061 kubelet[2595]: I0625 14:16:40.762946 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:40.763061 kubelet[2595]: I0625 14:16:40.763044 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.763183 kubelet[2595]: I0625 14:16:40.763112 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.763183 kubelet[2595]: I0625 14:16:40.763153 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:40.763317 kubelet[2595]: I0625 14:16:40.763189 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.763317 kubelet[2595]: I0625 14:16:40.763257 2595 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:40.765774 kubelet[2595]: E0625 14:16:40.765090 2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": dial tcp 172.31.16.245:6443: connect: connection refused" interval="400ms" Jun 25 14:16:40.782062 systemd[1]: Created slice kubepods-burstable-podb5aad3a82c91e0d331534ad830377090.slice - libcontainer container kubepods-burstable-podb5aad3a82c91e0d331534ad830377090.slice. Jun 25 14:16:40.869155 kubelet[2595]: I0625 14:16:40.869106 2595 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:40.870043 kubelet[2595]: E0625 14:16:40.869985 2595 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.245:6443/api/v1/nodes\": dial tcp 172.31.16.245:6443: connect: connection refused" node="ip-172-31-16-245" Jun 25 14:16:41.075249 containerd[1804]: time="2024-06-25T14:16:41.075005780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-245,Uid:649c8310f37d09ac34a3aeb81b0b8f5b,Namespace:kube-system,Attempt:0,}" Jun 25 14:16:41.077719 containerd[1804]: time="2024-06-25T14:16:41.077431921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-245,Uid:747038e66efc595a67c79c0bd4d9db23,Namespace:kube-system,Attempt:0,}" Jun 25 14:16:41.088418 containerd[1804]: time="2024-06-25T14:16:41.088329875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-245,Uid:b5aad3a82c91e0d331534ad830377090,Namespace:kube-system,Attempt:0,}" Jun 25 14:16:41.167123 kubelet[2595]: E0625 14:16:41.167055 2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": dial tcp 172.31.16.245:6443: connect: connection refused" interval="800ms" Jun 25 14:16:41.272645 kubelet[2595]: I0625 14:16:41.272591 2595 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:41.273167 kubelet[2595]: E0625 14:16:41.273119 2595 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.245:6443/api/v1/nodes\": dial tcp 172.31.16.245:6443: connect: connection refused" node="ip-172-31-16-245" Jun 25 14:16:41.362261 kubelet[2595]: W0625 14:16:41.362152 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.16.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.362414 kubelet[2595]: E0625 14:16:41.362287 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.245:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.587223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount618940373.mount: Deactivated successfully. 
Jun 25 14:16:41.601001 containerd[1804]: time="2024-06-25T14:16:41.600942990Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.603126 containerd[1804]: time="2024-06-25T14:16:41.603079364Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.605150 containerd[1804]: time="2024-06-25T14:16:41.605078912Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 14:16:41.605556 containerd[1804]: time="2024-06-25T14:16:41.605505313Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:16:41.606384 containerd[1804]: time="2024-06-25T14:16:41.606341483Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.608195 containerd[1804]: time="2024-06-25T14:16:41.608146688Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.609984 containerd[1804]: time="2024-06-25T14:16:41.609936882Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.612084 containerd[1804]: time="2024-06-25T14:16:41.612018919Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:16:41.612224 containerd[1804]: time="2024-06-25T14:16:41.612128264Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.619543 containerd[1804]: time="2024-06-25T14:16:41.617792860Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.619727 kubelet[2595]: W0625 14:16:41.619386 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.16.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.619727 kubelet[2595]: E0625 14:16:41.619472 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.245:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.624072 containerd[1804]: time="2024-06-25T14:16:41.624004818Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.626420 kubelet[2595]: W0625 14:16:41.626279 2595 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.16.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.626420 kubelet[2595]: E0625 14:16:41.626386 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.245:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:41.627128 containerd[1804]: time="2024-06-25T14:16:41.627071863Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.414447ms" Jun 25 14:16:41.630008 containerd[1804]: time="2024-06-25T14:16:41.629938241Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.766896ms" Jun 25 14:16:41.630313 containerd[1804]: time="2024-06-25T14:16:41.630259977Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.632237 containerd[1804]: time="2024-06-25T14:16:41.632164808Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 543.675174ms" Jun 25 14:16:41.633245 containerd[1804]: time="2024-06-25T14:16:41.633188648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.634842 containerd[1804]: time="2024-06-25T14:16:41.634791339Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.636438 containerd[1804]: time="2024-06-25T14:16:41.636377590Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:16:41.936122 containerd[1804]: time="2024-06-25T14:16:41.934833218Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:16:41.936385 containerd[1804]: time="2024-06-25T14:16:41.935089325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.936385 containerd[1804]: time="2024-06-25T14:16:41.935179722Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:16:41.936385 containerd[1804]: time="2024-06-25T14:16:41.935219082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.937211 containerd[1804]: time="2024-06-25T14:16:41.937077784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:16:41.937362 containerd[1804]: time="2024-06-25T14:16:41.937178550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.937362 containerd[1804]: time="2024-06-25T14:16:41.937236150Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:16:41.937362 containerd[1804]: time="2024-06-25T14:16:41.937271911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.948436 containerd[1804]: time="2024-06-25T14:16:41.948077548Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:16:41.948436 containerd[1804]: time="2024-06-25T14:16:41.948161237Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.948436 containerd[1804]: time="2024-06-25T14:16:41.948193001Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:16:41.948436 containerd[1804]: time="2024-06-25T14:16:41.948217506Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:16:41.968446 kubelet[2595]: E0625 14:16:41.968380 2595 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": dial tcp 172.31.16.245:6443: connect: connection refused" interval="1.6s" Jun 25 14:16:41.986978 systemd[1]: Started cri-containerd-86ff4d4c19d0e828dc553cfbba0c7699f781c80bf8b9d4142c942149b64d7333.scope - libcontainer container 86ff4d4c19d0e828dc553cfbba0c7699f781c80bf8b9d4142c942149b64d7333. Jun 25 14:16:41.998267 systemd[1]: Started cri-containerd-e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73.scope - libcontainer container e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73. Jun 25 14:16:42.021987 systemd[1]: Started cri-containerd-cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d.scope - libcontainer container cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d. 
Jun 25 14:16:42.030000 audit: BPF prog-id=61 op=LOAD Jun 25 14:16:42.031000 audit: BPF prog-id=62 op=LOAD Jun 25 14:16:42.031000 audit[2688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2657 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666634643463313964306538323864633535336366626261306337 Jun 25 14:16:42.031000 audit: BPF prog-id=63 op=LOAD Jun 25 14:16:42.031000 audit[2688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2657 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666634643463313964306538323864633535336366626261306337 Jun 25 14:16:42.032000 audit: BPF prog-id=63 op=UNLOAD Jun 25 14:16:42.032000 audit: BPF prog-id=62 op=UNLOAD Jun 25 14:16:42.032000 audit: BPF prog-id=64 op=LOAD Jun 25 14:16:42.032000 audit[2688]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2657 pid=2688 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836666634643463313964306538323864633535336366626261306337 Jun 25 14:16:42.044000 audit: BPF prog-id=65 op=LOAD Jun 25 14:16:42.045000 audit: BPF prog-id=66 op=LOAD Jun 25 14:16:42.045000 audit[2693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=2658 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313330613965323739643238366432643362613033613434326461 Jun 25 14:16:42.045000 audit: BPF prog-id=67 op=LOAD Jun 25 14:16:42.045000 audit[2693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=2658 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.045000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313330613965323739643238366432643362613033613434326461 Jun 25 14:16:42.045000 audit: BPF prog-id=67 op=UNLOAD Jun 25 14:16:42.045000 audit: BPF prog-id=66 op=UNLOAD Jun 25 14:16:42.045000 audit: BPF prog-id=68 op=LOAD Jun 25 14:16:42.045000 audit[2693]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=2658 pid=2693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.045000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539313330613965323739643238366432643362613033613434326461 Jun 25 14:16:42.064000 audit: BPF prog-id=69 op=LOAD Jun 25 14:16:42.076733 kubelet[2595]: I0625 14:16:42.075891 2595 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:42.076733 kubelet[2595]: E0625 14:16:42.076384 2595 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://172.31.16.245:6443/api/v1/nodes\": dial tcp 172.31.16.245:6443: connect: connection refused" node="ip-172-31-16-245" Jun 25 14:16:42.076000 audit: BPF prog-id=70 op=LOAD Jun 25 14:16:42.076000 audit[2700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2659 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.076000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376465626336313062656536323663373836373937366338626365 Jun 25 14:16:42.077000 audit: BPF prog-id=71 op=LOAD Jun 25 14:16:42.077000 audit[2700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2659 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.077000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376465626336313062656536323663373836373937366338626365 Jun 25 14:16:42.077000 audit: BPF prog-id=71 op=UNLOAD Jun 25 14:16:42.077000 audit: BPF prog-id=70 op=UNLOAD Jun 25 14:16:42.077000 audit: BPF prog-id=72 op=LOAD Jun 25 14:16:42.077000 audit[2700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2659 pid=2700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.077000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6362376465626336313062656536323663373836373937366338626365 Jun 25 14:16:42.095686 kubelet[2595]: W0625 14:16:42.095179 2595 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.16.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-245&limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:42.095686 kubelet[2595]: E0625 14:16:42.095287 2595 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.245:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-245&limit=500&resourceVersion=0": dial tcp 172.31.16.245:6443: connect: connection refused Jun 25 14:16:42.115383 containerd[1804]: time="2024-06-25T14:16:42.115313276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-245,Uid:649c8310f37d09ac34a3aeb81b0b8f5b,Namespace:kube-system,Attempt:0,} returns sandbox id \"86ff4d4c19d0e828dc553cfbba0c7699f781c80bf8b9d4142c942149b64d7333\"" Jun 25 14:16:42.125341 containerd[1804]: time="2024-06-25T14:16:42.125283181Z" level=info msg="CreateContainer within sandbox \"86ff4d4c19d0e828dc553cfbba0c7699f781c80bf8b9d4142c942149b64d7333\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:16:42.137010 containerd[1804]: time="2024-06-25T14:16:42.136943389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-245,Uid:747038e66efc595a67c79c0bd4d9db23,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73\"" Jun 25 14:16:42.141599 containerd[1804]: time="2024-06-25T14:16:42.141525917Z" level=info msg="CreateContainer within sandbox \"e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:16:42.167293 containerd[1804]: time="2024-06-25T14:16:42.167235392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-245,Uid:b5aad3a82c91e0d331534ad830377090,Namespace:kube-system,Attempt:0,} returns sandbox id \"cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d\"" Jun 25 14:16:42.172462 containerd[1804]: time="2024-06-25T14:16:42.172401259Z" level=info msg="CreateContainer within sandbox \"cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:16:42.174165 containerd[1804]: time="2024-06-25T14:16:42.174086066Z" level=info msg="CreateContainer within sandbox \"e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0\"" Jun 25 14:16:42.175627 containerd[1804]: time="2024-06-25T14:16:42.175544658Z" level=info msg="StartContainer for \"2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0\"" Jun 25 14:16:42.178619 containerd[1804]: time="2024-06-25T14:16:42.178544548Z" level=info msg="CreateContainer within sandbox \"86ff4d4c19d0e828dc553cfbba0c7699f781c80bf8b9d4142c942149b64d7333\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"c460cef95ffc0bb6a1e6c7ddad5a54e276c0058e09bf9c6868dbd0629534d2b4\"" Jun 25 14:16:42.179567 containerd[1804]: time="2024-06-25T14:16:42.179490627Z" level=info msg="StartContainer for \"c460cef95ffc0bb6a1e6c7ddad5a54e276c0058e09bf9c6868dbd0629534d2b4\"" Jun 25 14:16:42.246743 containerd[1804]: time="2024-06-25T14:16:42.245544163Z" level=info msg="CreateContainer within sandbox \"cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef\"" Jun 25 14:16:42.249469 containerd[1804]: time="2024-06-25T14:16:42.249378387Z" level=info msg="StartContainer for \"32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef\"" Jun 25 14:16:42.263969 systemd[1]: Started cri-containerd-2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0.scope - libcontainer container 2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0. Jun 25 14:16:42.281987 systemd[1]: Started cri-containerd-c460cef95ffc0bb6a1e6c7ddad5a54e276c0058e09bf9c6868dbd0629534d2b4.scope - libcontainer container c460cef95ffc0bb6a1e6c7ddad5a54e276c0058e09bf9c6868dbd0629534d2b4. Jun 25 14:16:42.298000 audit: BPF prog-id=73 op=LOAD Jun 25 14:16:42.300000 audit: BPF prog-id=74 op=LOAD Jun 25 14:16:42.300000 audit[2774]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2658 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383064326430316437613634623435366530633166613964656636 Jun 25 14:16:42.300000 audit: BPF prog-id=75 op=LOAD Jun 25 14:16:42.300000 audit[2774]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2658 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.300000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383064326430316437613634623435366530633166613964656636 Jun 25 14:16:42.300000 audit: BPF prog-id=75 op=UNLOAD Jun 25 14:16:42.301000 audit: BPF prog-id=74 op=UNLOAD Jun 25 14:16:42.301000 audit: BPF prog-id=76 op=LOAD Jun 25 14:16:42.301000 audit[2774]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2658 pid=2774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.301000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265383064326430316437613634623435366530633166613964656636 Jun 25 14:16:42.314000 audit: BPF prog-id=77 op=LOAD Jun 25 14:16:42.315000 audit: BPF prog-id=78 op=LOAD Jun 25 
14:16:42.315000 audit[2775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=2657 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363063656639356666633062623661316536633764646164356135 Jun 25 14:16:42.315000 audit: BPF prog-id=79 op=LOAD Jun 25 14:16:42.315000 audit[2775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=2657 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363063656639356666633062623661316536633764646164356135 Jun 25 14:16:42.315000 audit: BPF prog-id=79 op=UNLOAD Jun 25 14:16:42.315000 audit: BPF prog-id=78 op=UNLOAD Jun 25 14:16:42.315000 audit: BPF prog-id=80 op=LOAD Jun 25 14:16:42.315000 audit[2775]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=2657 pid=2775 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.315000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334363063656639356666633062623661316536633764646164356135 Jun 25 14:16:42.341993 systemd[1]: Started cri-containerd-32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef.scope - libcontainer container 32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef. 
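The audit SYSCALL/PROCTITLE records above come from runc setting up the static-pod sandboxes; the proctitle= field is the process command line, hex-encoded with NUL-separated arguments. A minimal decoding sketch (illustration only; the sample value is just the first few bytes of one proctitle field above):

```python
# Illustration only: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated).
def decode_proctitle(hex_value: str) -> list[str]:
    raw = bytes.fromhex(hex_value)                  # whole command line as one hex blob
    return [a.decode("utf-8", "replace") for a in raw.split(b"\x00") if a]

# First bytes of a proctitle= field above; decodes to ['runc', '--root']
print(decode_proctitle("72756E63002D2D726F6F74"))
```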
Jun 25 14:16:42.373000 audit: BPF prog-id=81 op=LOAD Jun 25 14:16:42.374000 audit: BPF prog-id=82 op=LOAD Jun 25 14:16:42.374000 audit[2814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=2659 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332626666303834313961626131306234646537633734376535333531 Jun 25 14:16:42.384000 audit: BPF prog-id=83 op=LOAD Jun 25 14:16:42.384000 audit[2814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=2659 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332626666303834313961626131306234646537633734376535333531 Jun 25 14:16:42.386000 audit: BPF prog-id=83 op=UNLOAD Jun 25 14:16:42.386000 audit: BPF prog-id=82 op=UNLOAD Jun 25 14:16:42.387000 audit: BPF prog-id=84 op=LOAD Jun 25 14:16:42.387000 audit[2814]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=2659 pid=2814 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:42.387000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332626666303834313961626131306234646537633734376535333531 Jun 25 14:16:42.396467 containerd[1804]: time="2024-06-25T14:16:42.396400956Z" level=info msg="StartContainer for \"c460cef95ffc0bb6a1e6c7ddad5a54e276c0058e09bf9c6868dbd0629534d2b4\" returns successfully" Jun 25 14:16:42.423639 containerd[1804]: time="2024-06-25T14:16:42.423580112Z" level=info msg="StartContainer for \"2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0\" returns successfully" Jun 25 14:16:42.477560 containerd[1804]: time="2024-06-25T14:16:42.477494107Z" level=info msg="StartContainer for \"32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef\" returns successfully" Jun 25 14:16:43.679265 kubelet[2595]: I0625 14:16:43.679215 2595 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:45.869159 kernel: kauditd_printk_skb: 131 callbacks suppressed Jun 25 14:16:45.869320 kernel: audit: type=1400 audit(1719325005.862:339): avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.862000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 
scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.862000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c000 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.862000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.883482 kernel: audit: type=1300 audit(1719325005.862:339): arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c000 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.883578 kernel: audit: type=1327 audit(1719325005.862:339): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.894681 kernel: audit: type=1400 audit(1719325005.874:340): avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=185 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.874000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=185 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.874000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c060 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.903568 kernel: audit: type=1300 audit(1719325005.874:340): arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c060 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.874000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.910510 kernel: audit: type=1327 audit(1719325005.874:340): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.874000 audit[2808]: AVC avc: 
denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.874000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c120 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.922875 kernel: audit: type=1400 audit(1719325005.874:341): avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.922989 kernel: audit: type=1300 audit(1719325005.874:341): arch=c00000b7 syscall=27 success=no exit=-13 a0=42 a1=400755c120 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.874000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.929496 kernel: audit: type=1327 audit(1719325005.874:341): proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.895000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.933969 kernel: audit: type=1400 audit(1719325005.895:342): avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.895000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=4e a1=4007021840 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:45.895000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:45.916000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.916000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 
success=no exit=-13 a0=8 a1=4000a76fc0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:16:45.916000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:16:45.921000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:45.921000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=4000151960 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:16:45.921000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:16:46.019000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:46.019000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=83 a1=4006a15e40 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:46.019000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:46.022000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:16:46.022000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=83 a1=40067cec00 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:16:46.022000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:16:46.227115 kubelet[2595]: E0625 14:16:46.226968 2595 nodelease.go:49] "Failed to get node when trying 
to set owner ref to the node lease" err="nodes \"ip-172-31-16-245\" not found" node="ip-172-31-16-245" Jun 25 14:16:46.436865 kubelet[2595]: I0625 14:16:46.436811 2595 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-245" Jun 25 14:16:46.546983 kubelet[2595]: I0625 14:16:46.546853 2595 apiserver.go:52] "Watching apiserver" Jun 25 14:16:46.561474 kubelet[2595]: I0625 14:16:46.561416 2595 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 14:16:46.635774 update_engine[1795]: I0625 14:16:46.635716 1795 update_attempter.cc:509] Updating boot flags... Jun 25 14:16:46.758733 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (2883) Jun 25 14:16:48.368008 systemd[1]: Reloading. Jun 25 14:16:48.798316 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:16:49.000000 audit: BPF prog-id=85 op=LOAD Jun 25 14:16:49.000000 audit: BPF prog-id=47 op=UNLOAD Jun 25 14:16:49.001000 audit: BPF prog-id=86 op=LOAD Jun 25 14:16:49.001000 audit: BPF prog-id=87 op=LOAD Jun 25 14:16:49.001000 audit: BPF prog-id=48 op=UNLOAD Jun 25 14:16:49.001000 audit: BPF prog-id=49 op=UNLOAD Jun 25 14:16:49.002000 audit: BPF prog-id=88 op=LOAD Jun 25 14:16:49.002000 audit: BPF prog-id=65 op=UNLOAD Jun 25 14:16:49.003000 audit: BPF prog-id=89 op=LOAD Jun 25 14:16:49.003000 audit: BPF prog-id=90 op=LOAD Jun 25 14:16:49.003000 audit: BPF prog-id=50 op=UNLOAD Jun 25 14:16:49.003000 audit: BPF prog-id=51 op=UNLOAD Jun 25 14:16:49.004000 audit: BPF prog-id=91 op=LOAD Jun 25 14:16:49.004000 audit: BPF prog-id=52 op=UNLOAD Jun 25 14:16:49.007000 audit: BPF prog-id=92 op=LOAD Jun 25 14:16:49.007000 audit: BPF prog-id=53 op=UNLOAD Jun 25 14:16:49.007000 audit: BPF prog-id=93 op=LOAD Jun 25 14:16:49.007000 audit: BPF prog-id=94 op=LOAD Jun 25 14:16:49.008000 audit: BPF prog-id=54 op=UNLOAD Jun 25 14:16:49.008000 audit: BPF prog-id=55 op=UNLOAD Jun 25 14:16:49.008000 audit: BPF prog-id=95 op=LOAD Jun 25 14:16:49.009000 audit: BPF prog-id=56 op=UNLOAD Jun 25 14:16:49.011000 audit: BPF prog-id=96 op=LOAD Jun 25 14:16:49.011000 audit: BPF prog-id=73 op=UNLOAD Jun 25 14:16:49.014000 audit: BPF prog-id=97 op=LOAD Jun 25 14:16:49.014000 audit: BPF prog-id=69 op=UNLOAD Jun 25 14:16:49.015000 audit: BPF prog-id=98 op=LOAD Jun 25 14:16:49.015000 audit: BPF prog-id=57 op=UNLOAD Jun 25 14:16:49.020000 audit: BPF prog-id=99 op=LOAD Jun 25 14:16:49.020000 audit: BPF prog-id=81 op=UNLOAD Jun 25 14:16:49.021000 audit: BPF prog-id=100 op=LOAD Jun 25 14:16:49.021000 audit: BPF prog-id=77 op=UNLOAD Jun 25 14:16:49.022000 audit: BPF prog-id=101 op=LOAD Jun 25 14:16:49.022000 audit: BPF prog-id=61 op=UNLOAD Jun 25 14:16:49.026000 audit: BPF prog-id=102 op=LOAD Jun 25 14:16:49.026000 audit: BPF prog-id=58 op=UNLOAD Jun 25 14:16:49.027000 audit: BPF prog-id=103 op=LOAD Jun 25 14:16:49.027000 audit: BPF prog-id=104 op=LOAD Jun 25 14:16:49.027000 audit: BPF prog-id=59 op=UNLOAD Jun 25 14:16:49.027000 audit: BPF prog-id=60 op=UNLOAD Jun 25 14:16:49.074292 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:49.094315 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:16:49.094773 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
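The earlier "Unable to register node" errors and the "Successfully registered node" entry above bracket the control-plane bootstrap: the kubelet keeps retrying https://172.31.16.245:6443 until the static kube-apiserver pod is actually listening. A small, hypothetical reachability probe against that endpoint (taken from the entries above; illustration only, not part of the captured log):

```python
# Hypothetical probe, not part of the log: check whether the API server endpoint the
# kubelet was dialing (172.31.16.245:6443, from the entries above) accepts TCP yet.
import socket

def apiserver_listening(host: str = "172.31.16.245", port: int = 6443) -> bool:
    try:
        with socket.create_connection((host, port), timeout=2):
            return True                              # handshake completed, something is listening
    except OSError:                                  # ConnectionRefusedError, timeout, unreachable
        return False

if __name__ == "__main__":
    print(apiserver_listening())
```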
Jun 25 14:16:49.094864 systemd[1]: kubelet.service: Consumed 1.863s CPU time. Jun 25 14:16:49.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:49.102425 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:49.445091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:49.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:49.580098 kubelet[3044]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:16:49.580098 kubelet[3044]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:16:49.580794 kubelet[3044]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:16:49.580794 kubelet[3044]: I0625 14:16:49.580232 3044 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:16:49.593050 kubelet[3044]: I0625 14:16:49.592992 3044 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jun 25 14:16:49.593050 kubelet[3044]: I0625 14:16:49.593034 3044 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:16:49.593385 kubelet[3044]: I0625 14:16:49.593356 3044 server.go:927] "Client rotation is on, will bootstrap in background" Jun 25 14:16:49.597465 kubelet[3044]: I0625 14:16:49.597407 3044 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:16:49.600547 kubelet[3044]: I0625 14:16:49.600511 3044 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:16:49.614855 kubelet[3044]: I0625 14:16:49.614791 3044 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jun 25 14:16:49.615480 kubelet[3044]: I0625 14:16:49.615425 3044 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:16:49.616023 kubelet[3044]: I0625 14:16:49.615632 3044 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-16-245","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:16:49.616282 kubelet[3044]: I0625 14:16:49.616250 3044 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:16:49.616424 kubelet[3044]: I0625 14:16:49.616403 3044 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:16:49.616595 kubelet[3044]: I0625 14:16:49.616573 3044 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:16:49.617001 kubelet[3044]: I0625 14:16:49.616967 3044 kubelet.go:400] "Attempting to sync node with API server" Jun 25 14:16:49.617717 kubelet[3044]: I0625 14:16:49.617651 3044 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:16:49.617977 kubelet[3044]: I0625 14:16:49.617954 3044 kubelet.go:312] "Adding apiserver pod source" Jun 25 14:16:49.618115 kubelet[3044]: I0625 14:16:49.618095 3044 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:16:49.626388 kubelet[3044]: I0625 14:16:49.626350 3044 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:16:49.655171 kubelet[3044]: I0625 14:16:49.655135 3044 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 25 14:16:49.656301 kubelet[3044]: I0625 14:16:49.656268 3044 server.go:1264] "Started kubelet" Jun 25 14:16:49.682035 kubelet[3044]: I0625 14:16:49.681950 3044 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 25 14:16:49.683230 kubelet[3044]: I0625 14:16:49.683198 3044 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 
14:16:49.687492 kubelet[3044]: I0625 14:16:49.687421 3044 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:16:49.692277 kubelet[3044]: I0625 14:16:49.692225 3044 server.go:455] "Adding debug handlers to kubelet server" Jun 25 14:16:49.698000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/opt/libexec/kubernetes/kubelet-plugins/volume/exec" dev="nvme0n1p9" ino=7827 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:usr_t:s0 tclass=dir permissive=0 Jun 25 14:16:49.698000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=4000c3c2c0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:16:49.698000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:16:49.701655 kubelet[3044]: I0625 14:16:49.700965 3044 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:16:49.723436 kubelet[3044]: I0625 14:16:49.723398 3044 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:16:49.724424 kubelet[3044]: I0625 14:16:49.724386 3044 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jun 25 14:16:49.724946 kubelet[3044]: I0625 14:16:49.724921 3044 reconciler.go:26] "Reconciler: start to sync state" Jun 25 14:16:49.730428 kubelet[3044]: I0625 14:16:49.730265 3044 factory.go:221] Registration of the systemd container factory successfully Jun 25 14:16:49.730619 kubelet[3044]: I0625 14:16:49.730430 3044 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 25 14:16:49.751602 kubelet[3044]: E0625 14:16:49.751556 3044 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:16:49.752810 kubelet[3044]: I0625 14:16:49.752765 3044 factory.go:221] Registration of the containerd container factory successfully Jun 25 14:16:49.775876 kubelet[3044]: I0625 14:16:49.773858 3044 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:16:49.776785 kubelet[3044]: I0625 14:16:49.776592 3044 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:16:49.776785 kubelet[3044]: I0625 14:16:49.776776 3044 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:16:49.778723 kubelet[3044]: I0625 14:16:49.776812 3044 kubelet.go:2337] "Starting kubelet main sync loop" Jun 25 14:16:49.778723 kubelet[3044]: E0625 14:16:49.776902 3044 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:16:49.839338 kubelet[3044]: I0625 14:16:49.838314 3044 kubelet_node_status.go:73] "Attempting to register node" node="ip-172-31-16-245" Jun 25 14:16:49.878989 kubelet[3044]: I0625 14:16:49.878945 3044 kubelet_node_status.go:112] "Node was previously registered" node="ip-172-31-16-245" Jun 25 14:16:49.879310 kubelet[3044]: I0625 14:16:49.879287 3044 kubelet_node_status.go:76] "Successfully registered node" node="ip-172-31-16-245" Jun 25 14:16:49.881033 kubelet[3044]: E0625 14:16:49.880986 3044 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:16:49.948359 kubelet[3044]: I0625 14:16:49.948311 3044 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:16:49.948359 kubelet[3044]: I0625 14:16:49.948348 3044 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:16:49.948605 kubelet[3044]: I0625 14:16:49.948387 3044 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:16:49.948740 kubelet[3044]: I0625 14:16:49.948697 3044 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:16:49.948845 kubelet[3044]: I0625 14:16:49.948736 3044 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:16:49.948845 kubelet[3044]: I0625 14:16:49.948777 3044 policy_none.go:49] "None policy: Start" Jun 25 14:16:49.951704 kubelet[3044]: I0625 14:16:49.951476 3044 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 25 14:16:49.951704 kubelet[3044]: I0625 14:16:49.951533 3044 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:16:49.954517 kubelet[3044]: I0625 14:16:49.953755 3044 state_mem.go:75] "Updated machine memory state" Jun 25 14:16:49.992586 kubelet[3044]: I0625 14:16:49.992542 3044 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:16:49.992958 kubelet[3044]: I0625 14:16:49.992889 3044 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 25 14:16:49.995608 kubelet[3044]: I0625 14:16:49.995566 3044 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:16:50.081879 kubelet[3044]: I0625 14:16:50.081824 3044 topology_manager.go:215] "Topology Admit Handler" podUID="747038e66efc595a67c79c0bd4d9db23" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-16-245" Jun 25 14:16:50.082985 kubelet[3044]: I0625 14:16:50.082353 3044 topology_manager.go:215] "Topology Admit Handler" podUID="649c8310f37d09ac34a3aeb81b0b8f5b" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-16-245" Jun 25 14:16:50.083635 kubelet[3044]: I0625 14:16:50.083442 3044 topology_manager.go:215] "Topology Admit Handler" podUID="b5aad3a82c91e0d331534ad830377090" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.093500 kubelet[3044]: E0625 14:16:50.093334 3044 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-245\" already exists" 
pod="kube-system/kube-scheduler-ip-172-31-16-245" Jun 25 14:16:50.098315 kubelet[3044]: E0625 14:16:50.098259 3044 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-245\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.126641 kubelet[3044]: I0625 14:16:50.126562 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.127069 kubelet[3044]: I0625 14:16:50.127006 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.127392 kubelet[3044]: I0625 14:16:50.127354 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.127647 kubelet[3044]: I0625 14:16:50.127612 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.128027 kubelet[3044]: I0625 14:16:50.127982 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/747038e66efc595a67c79c0bd4d9db23-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-245\" (UID: \"747038e66efc595a67c79c0bd4d9db23\") " pod="kube-system/kube-scheduler-ip-172-31-16-245" Jun 25 14:16:50.128339 kubelet[3044]: I0625 14:16:50.128303 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-ca-certs\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:50.128617 kubelet[3044]: I0625 14:16:50.128576 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:50.128926 kubelet[3044]: I0625 14:16:50.128886 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/649c8310f37d09ac34a3aeb81b0b8f5b-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-245\" (UID: \"649c8310f37d09ac34a3aeb81b0b8f5b\") " 
pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:50.129295 kubelet[3044]: I0625 14:16:50.129243 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5aad3a82c91e0d331534ad830377090-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-245\" (UID: \"b5aad3a82c91e0d331534ad830377090\") " pod="kube-system/kube-controller-manager-ip-172-31-16-245" Jun 25 14:16:50.619622 kubelet[3044]: I0625 14:16:50.619576 3044 apiserver.go:52] "Watching apiserver" Jun 25 14:16:50.624844 kubelet[3044]: I0625 14:16:50.624780 3044 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jun 25 14:16:50.891579 kubelet[3044]: E0625 14:16:50.890985 3044 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-245\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-245" Jun 25 14:16:50.892867 kubelet[3044]: E0625 14:16:50.892818 3044 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-245\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-245" Jun 25 14:16:51.002114 kubelet[3044]: I0625 14:16:51.001993 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-245" podStartSLOduration=3.001970777 podStartE2EDuration="3.001970777s" podCreationTimestamp="2024-06-25 14:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:16:50.91438382 +0000 UTC m=+1.452559442" watchObservedRunningTime="2024-06-25 14:16:51.001970777 +0000 UTC m=+1.540146387" Jun 25 14:16:51.060928 kubelet[3044]: I0625 14:16:51.060834 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-245" podStartSLOduration=1.060813969 podStartE2EDuration="1.060813969s" podCreationTimestamp="2024-06-25 14:16:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:16:51.003761094 +0000 UTC m=+1.541936716" watchObservedRunningTime="2024-06-25 14:16:51.060813969 +0000 UTC m=+1.598989591" Jun 25 14:16:51.100219 kubelet[3044]: I0625 14:16:51.100137 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-245" podStartSLOduration=4.100090552 podStartE2EDuration="4.100090552s" podCreationTimestamp="2024-06-25 14:16:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:16:51.062187919 +0000 UTC m=+1.600363541" watchObservedRunningTime="2024-06-25 14:16:51.100090552 +0000 UTC m=+1.638266186" Jun 25 14:16:54.421913 sudo[2090]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:54.420000 audit[2090]: USER_END pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:54.422987 kernel: kauditd_printk_skb: 59 callbacks suppressed Jun 25 14:16:54.423081 kernel: audit: type=1106 audit(1719325014.420:390): pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:54.420000 audit[2090]: CRED_DISP pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:54.431036 kernel: audit: type=1104 audit(1719325014.420:391): pid=2090 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:54.450972 sshd[2087]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:54.451000 audit[2087]: USER_END pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:54.451000 audit[2087]: CRED_DISP pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:54.464302 kernel: audit: type=1106 audit(1719325014.451:392): pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:54.464430 kernel: audit: type=1104 audit(1719325014.451:393): pid=2087 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:54.465077 systemd[1]: sshd@6-172.31.16.245:22-139.178.68.195:49600.service: Deactivated successfully. Jun 25 14:16:54.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.245:22-139.178.68.195:49600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:54.466438 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:16:54.467153 systemd[1]: session-7.scope: Consumed 10.531s CPU time. Jun 25 14:16:54.470036 systemd-logind[1794]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:16:54.472039 kernel: audit: type=1131 audit(1719325014.463:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.245:22-139.178.68.195:49600 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:54.472880 systemd-logind[1794]: Removed session 7. 
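The AVC denials above, and the similar ones that follow, record SELinux refusing file-watch requests from the control-plane containers on the certificate files under /etc/kubernetes/pki; on this architecture (arch=c00000b7, aarch64) syscall=27 appears to be inotify_add_watch, exit=-13 is EACCES, and permissive=0 means the denial was enforced. A rough sketch for summarizing such denials from journal text on stdin (assumes one audit record per input line; illustration only):

```python
# Sketch: count SELinux AVC denials like the ones in this log. Reads journal text from
# stdin and assumes each audit record is on its own line; the field names (comm=, path=)
# match the entries above.
import collections
import re
import sys

AVC = re.compile(r'avc:\s+denied\s+\{\s*(?P<perm>[^}]+?)\s*\}.*?'
                 r'comm="(?P<comm>[^"]+)".*?path="(?P<path>[^"]+)"')

counts = collections.Counter()
for line in sys.stdin:
    m = AVC.search(line)
    if m:
        counts[(m["comm"], m["perm"], m["path"])] += 1

for (comm, perm, path), n in counts.most_common():
    print(f"{n:4d}  {comm:<16} {perm:<8} {path}")
```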
Jun 25 14:17:03.039000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.039000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000f20f60 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.050806 kernel: audit: type=1400 audit(1719325023.039:395): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.050960 kernel: audit: type=1300 audit(1719325023.039:395): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000f20f60 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.051012 kernel: audit: type=1327 audit(1719325023.039:395): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.039000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.039000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.059785 kernel: audit: type=1400 audit(1719325023.039:396): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.059935 kernel: audit: type=1300 audit(1719325023.039:396): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000f21160 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.039000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000f21160 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.039000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.070575 kernel: audit: type=1327 audit(1719325023.039:396): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.039000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.078279 kernel: audit: type=1400 audit(1719325023.039:397): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.039000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000f21180 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.084153 kernel: audit: type=1300 audit(1719325023.039:397): arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=4000f21180 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:03.039000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.089103 kernel: audit: type=1327 audit(1719325023.039:397): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.043000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.093531 kernel: audit: type=1400 audit(1719325023.043:398): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:03.043000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001402d40 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 
14:17:03.043000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:03.498633 kubelet[3044]: I0625 14:17:03.498581 3044 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:17:03.499408 containerd[1804]: time="2024-06-25T14:17:03.499316603Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 25 14:17:03.499927 kubelet[3044]: I0625 14:17:03.499815 3044 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:17:03.935782 kubelet[3044]: I0625 14:17:03.935711 3044 topology_manager.go:215] "Topology Admit Handler" podUID="7ba52c99-18d4-42d5-960e-e1d8aa028c8c" podNamespace="kube-system" podName="kube-proxy-9xsxb" Jun 25 14:17:03.960267 systemd[1]: Created slice kubepods-besteffort-pod7ba52c99_18d4_42d5_960e_e1d8aa028c8c.slice - libcontainer container kubepods-besteffort-pod7ba52c99_18d4_42d5_960e_e1d8aa028c8c.slice. Jun 25 14:17:04.016695 kubelet[3044]: I0625 14:17:04.016615 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7ba52c99-18d4-42d5-960e-e1d8aa028c8c-kube-proxy\") pod \"kube-proxy-9xsxb\" (UID: \"7ba52c99-18d4-42d5-960e-e1d8aa028c8c\") " pod="kube-system/kube-proxy-9xsxb" Jun 25 14:17:04.016894 kubelet[3044]: I0625 14:17:04.016720 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ba52c99-18d4-42d5-960e-e1d8aa028c8c-lib-modules\") pod \"kube-proxy-9xsxb\" (UID: \"7ba52c99-18d4-42d5-960e-e1d8aa028c8c\") " pod="kube-system/kube-proxy-9xsxb" Jun 25 14:17:04.016894 kubelet[3044]: I0625 14:17:04.016764 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7ba52c99-18d4-42d5-960e-e1d8aa028c8c-xtables-lock\") pod \"kube-proxy-9xsxb\" (UID: \"7ba52c99-18d4-42d5-960e-e1d8aa028c8c\") " pod="kube-system/kube-proxy-9xsxb" Jun 25 14:17:04.016894 kubelet[3044]: I0625 14:17:04.016803 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvlsd\" (UniqueName: \"kubernetes.io/projected/7ba52c99-18d4-42d5-960e-e1d8aa028c8c-kube-api-access-jvlsd\") pod \"kube-proxy-9xsxb\" (UID: \"7ba52c99-18d4-42d5-960e-e1d8aa028c8c\") " pod="kube-system/kube-proxy-9xsxb" Jun 25 14:17:04.183104 kubelet[3044]: I0625 14:17:04.183035 3044 topology_manager.go:215] "Topology Admit Handler" podUID="a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53" podNamespace="tigera-operator" podName="tigera-operator-76ff79f7fd-27hvv" Jun 25 14:17:04.195928 systemd[1]: Created slice kubepods-besteffort-poda4af09ce_0b30_4c42_a2c9_1a4ebfda5e53.slice - libcontainer container kubepods-besteffort-poda4af09ce_0b30_4c42_a2c9_1a4ebfda5e53.slice. 
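The kubepods-besteffort-pod…slice units created above follow the systemd cgroup-driver layout the kubelet was configured with: QoS class plus the pod UID with dashes mapped to underscores. A tiny sketch that mirrors the naming visible in these entries (not the kubelet's own code):

```python
# Mirrors the slice naming visible above (cgroupDriver "systemd"); not kubelet code.
def besteffort_pod_slice(pod_uid: str) -> str:
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

# UID of kube-proxy-9xsxb from the entries above:
print(besteffort_pod_slice("7ba52c99-18d4-42d5-960e-e1d8aa028c8c"))
# kubepods-besteffort-pod7ba52c99_18d4_42d5_960e_e1d8aa028c8c.slice
```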
Jun 25 14:17:04.196926 kubelet[3044]: W0625 14:17:04.196876 3044 reflector.go:547] object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:17:04.197057 kubelet[3044]: E0625 14:17:04.196937 3044 reflector.go:150] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:17:04.198261 kubelet[3044]: W0625 14:17:04.197402 3044 reflector.go:547] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:17:04.198261 kubelet[3044]: E0625 14:17:04.197456 3044 reflector.go:150] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:17:04.218165 kubelet[3044]: I0625 14:17:04.218082 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brbj5\" (UniqueName: \"kubernetes.io/projected/a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53-kube-api-access-brbj5\") pod \"tigera-operator-76ff79f7fd-27hvv\" (UID: \"a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53\") " pod="tigera-operator/tigera-operator-76ff79f7fd-27hvv" Jun 25 14:17:04.218444 kubelet[3044]: I0625 14:17:04.218399 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53-var-lib-calico\") pod \"tigera-operator-76ff79f7fd-27hvv\" (UID: \"a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53\") " pod="tigera-operator/tigera-operator-76ff79f7fd-27hvv" Jun 25 14:17:04.273749 containerd[1804]: time="2024-06-25T14:17:04.273688306Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xsxb,Uid:7ba52c99-18d4-42d5-960e-e1d8aa028c8c,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:04.317199 containerd[1804]: time="2024-06-25T14:17:04.317044345Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:04.317733 containerd[1804]: time="2024-06-25T14:17:04.317559124Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:04.317733 containerd[1804]: time="2024-06-25T14:17:04.317618908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:04.321378 containerd[1804]: time="2024-06-25T14:17:04.320926074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:04.363068 systemd[1]: Started cri-containerd-dd5351209891066a6ae99d1f55d81fc69db37812efeb6850e5bbc935d9431922.scope - libcontainer container dd5351209891066a6ae99d1f55d81fc69db37812efeb6850e5bbc935d9431922. Jun 25 14:17:04.383000 audit: BPF prog-id=105 op=LOAD Jun 25 14:17:04.384000 audit: BPF prog-id=106 op=LOAD Jun 25 14:17:04.384000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3133 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353335313230393839313036366136616539396431663535643831 Jun 25 14:17:04.384000 audit: BPF prog-id=107 op=LOAD Jun 25 14:17:04.384000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3133 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353335313230393839313036366136616539396431663535643831 Jun 25 14:17:04.384000 audit: BPF prog-id=107 op=UNLOAD Jun 25 14:17:04.384000 audit: BPF prog-id=106 op=UNLOAD Jun 25 14:17:04.384000 audit: BPF prog-id=108 op=LOAD Jun 25 14:17:04.384000 audit[3142]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3133 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.384000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464353335313230393839313036366136616539396431663535643831 Jun 25 14:17:04.409222 containerd[1804]: time="2024-06-25T14:17:04.409077631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9xsxb,Uid:7ba52c99-18d4-42d5-960e-e1d8aa028c8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"dd5351209891066a6ae99d1f55d81fc69db37812efeb6850e5bbc935d9431922\"" Jun 25 14:17:04.415386 containerd[1804]: time="2024-06-25T14:17:04.415041500Z" level=info msg="CreateContainer within sandbox \"dd5351209891066a6ae99d1f55d81fc69db37812efeb6850e5bbc935d9431922\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:17:04.447639 containerd[1804]: time="2024-06-25T14:17:04.447501733Z" level=info msg="CreateContainer within sandbox \"dd5351209891066a6ae99d1f55d81fc69db37812efeb6850e5bbc935d9431922\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"c5d880254224c771bb4a003357601d283d50dfc7d024b54479797c748ca35985\"" Jun 25 14:17:04.450549 containerd[1804]: time="2024-06-25T14:17:04.449736179Z" level=info msg="StartContainer for 
\"c5d880254224c771bb4a003357601d283d50dfc7d024b54479797c748ca35985\"" Jun 25 14:17:04.493021 systemd[1]: Started cri-containerd-c5d880254224c771bb4a003357601d283d50dfc7d024b54479797c748ca35985.scope - libcontainer container c5d880254224c771bb4a003357601d283d50dfc7d024b54479797c748ca35985. Jun 25 14:17:04.516000 audit: BPF prog-id=109 op=LOAD Jun 25 14:17:04.516000 audit[3175]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001338b0 a2=78 a3=0 items=0 ppid=3133 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.516000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335643838303235343232346337373162623461303033333537363031 Jun 25 14:17:04.517000 audit: BPF prog-id=110 op=LOAD Jun 25 14:17:04.517000 audit[3175]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000133640 a2=78 a3=0 items=0 ppid=3133 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.517000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335643838303235343232346337373162623461303033333537363031 Jun 25 14:17:04.518000 audit: BPF prog-id=110 op=UNLOAD Jun 25 14:17:04.518000 audit: BPF prog-id=109 op=UNLOAD Jun 25 14:17:04.519000 audit: BPF prog-id=111 op=LOAD Jun 25 14:17:04.519000 audit[3175]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000133b10 a2=78 a3=0 items=0 ppid=3133 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.519000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6335643838303235343232346337373162623461303033333537363031 Jun 25 14:17:04.545889 containerd[1804]: time="2024-06-25T14:17:04.545717097Z" level=info msg="StartContainer for \"c5d880254224c771bb4a003357601d283d50dfc7d024b54479797c748ca35985\" returns successfully" Jun 25 14:17:04.681000 audit[3227]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.681000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc02708d0 a2=0 a3=1 items=0 ppid=3185 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:04.687000 audit[3228]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=3228 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.687000 audit[3228]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0d16360 a2=0 a3=1 items=0 ppid=3185 pid=3228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.687000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:17:04.689000 audit[3229]: NETFILTER_CFG table=mangle:40 family=2 entries=1 op=nft_register_chain pid=3229 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.689000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffef1bdfb0 a2=0 a3=1 items=0 ppid=3185 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.689000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:04.695000 audit[3231]: NETFILTER_CFG table=nat:41 family=2 entries=1 op=nft_register_chain pid=3231 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.695000 audit[3231]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd56f5650 a2=0 a3=1 items=0 ppid=3185 pid=3231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.695000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:17:04.696000 audit[3230]: NETFILTER_CFG table=filter:42 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.696000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe65026e0 a2=0 a3=1 items=0 ppid=3185 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.696000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:17:04.700000 audit[3232]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3232 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.700000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfa4af90 a2=0 a3=1 items=0 ppid=3185 pid=3232 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.700000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:17:04.787000 audit[3233]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3233 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.787000 audit[3233]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff857fe30 a2=0 a3=1 items=0 ppid=3185 pid=3233 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.787000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:17:04.796000 audit[3235]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.796000 audit[3235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc79b4cf0 a2=0 a3=1 items=0 ppid=3185 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:17:04.806000 audit[3238]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.806000 audit[3238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc63a4240 a2=0 a3=1 items=0 ppid=3185 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:17:04.809000 audit[3239]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.809000 audit[3239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf828570 a2=0 a3=1 items=0 ppid=3185 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.809000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:17:04.815000 audit[3241]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.815000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff54fa480 a2=0 a3=1 items=0 ppid=3185 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.815000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:17:04.819000 audit[3242]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3242 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.819000 
audit[3242]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc678bf30 a2=0 a3=1 items=0 ppid=3185 pid=3242 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.819000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:17:04.826000 audit[3244]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.826000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe97c8ed0 a2=0 a3=1 items=0 ppid=3185 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.826000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:17:04.836000 audit[3247]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3247 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.836000 audit[3247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffccea79d0 a2=0 a3=1 items=0 ppid=3185 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.836000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:17:04.839000 audit[3248]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.839000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd9e9350 a2=0 a3=1 items=0 ppid=3185 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.839000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:17:04.846000 audit[3250]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.846000 audit[3250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffebf154a0 a2=0 a3=1 items=0 ppid=3185 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.846000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 
14:17:04.850000 audit[3251]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3251 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.850000 audit[3251]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca896600 a2=0 a3=1 items=0 ppid=3185 pid=3251 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.850000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:17:04.860000 audit[3253]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.860000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffec8ef00 a2=0 a3=1 items=0 ppid=3185 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.860000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:04.869000 audit[3256]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.869000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe4267600 a2=0 a3=1 items=0 ppid=3185 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.869000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:04.881000 audit[3259]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3259 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.881000 audit[3259]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd636f040 a2=0 a3=1 items=0 ppid=3185 pid=3259 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.881000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:17:04.884000 audit[3260]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3260 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.884000 audit[3260]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcfbe8fa0 a2=0 a3=1 items=0 ppid=3185 pid=3260 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.884000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:17:04.893000 audit[3262]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3262 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.893000 audit[3262]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffca77ec90 a2=0 a3=1 items=0 ppid=3185 pid=3262 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.893000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:04.907000 audit[3265]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3265 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.907000 audit[3265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe93277d0 a2=0 a3=1 items=0 ppid=3185 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.907000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:04.910000 audit[3266]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3266 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.910000 audit[3266]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9d711a0 a2=0 a3=1 items=0 ppid=3185 pid=3266 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.910000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:17:04.918000 audit[3268]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3268 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:04.918000 audit[3268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff9a29530 a2=0 a3=1 items=0 ppid=3185 pid=3268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.918000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:17:04.956000 audit[3274]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:04.956000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffffd787220 a2=0 a3=1 items=0 ppid=3185 pid=3274 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.956000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:04.977000 audit[3274]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3274 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:04.977000 audit[3274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffd787220 a2=0 a3=1 items=0 ppid=3185 pid=3274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.977000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:04.980000 audit[3280]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3280 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.980000 audit[3280]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff54cc280 a2=0 a3=1 items=0 ppid=3185 pid=3280 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.980000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:17:04.986000 audit[3282]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3282 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.986000 audit[3282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff4ab0010 a2=0 a3=1 items=0 ppid=3185 pid=3282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:17:04.996000 audit[3285]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3285 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.996000 audit[3285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffffcf22940 a2=0 a3=1 items=0 ppid=3185 pid=3285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.996000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:17:04.999000 audit[3286]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3286 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:04.999000 audit[3286]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcb4a9040 a2=0 a3=1 items=0 ppid=3185 pid=3286 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:04.999000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:17:05.005000 audit[3288]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3288 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.005000 audit[3288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe18170e0 a2=0 a3=1 items=0 ppid=3185 pid=3288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.005000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:17:05.008000 audit[3289]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3289 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.008000 audit[3289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce052be0 a2=0 a3=1 items=0 ppid=3185 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.008000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:17:05.014000 audit[3291]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3291 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.014000 audit[3291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff892d6d0 a2=0 a3=1 items=0 ppid=3185 pid=3291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.014000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:17:05.024000 audit[3294]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3294 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.024000 audit[3294]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffd3d5d0d0 a2=0 a3=1 items=0 ppid=3185 pid=3294 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.024000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:17:05.027000 audit[3295]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3295 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.027000 audit[3295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff31772b0 a2=0 a3=1 items=0 ppid=3185 pid=3295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.027000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:17:05.032000 audit[3297]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3297 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.032000 audit[3297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff384dd90 a2=0 a3=1 items=0 ppid=3185 pid=3297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.032000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:17:05.035000 audit[3298]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3298 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.035000 audit[3298]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc40d8c80 a2=0 a3=1 items=0 ppid=3185 pid=3298 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.035000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:17:05.041000 audit[3300]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3300 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.041000 audit[3300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6a2da90 a2=0 a3=1 items=0 ppid=3185 pid=3300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.041000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:05.050000 audit[3303]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3303 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.050000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcb08f6f0 a2=0 a3=1 items=0 ppid=3185 pid=3303 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.050000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:17:05.058000 audit[3306]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.058000 audit[3306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc780a410 a2=0 a3=1 items=0 ppid=3185 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:17:05.061000 audit[3307]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.061000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7a96360 a2=0 a3=1 items=0 ppid=3185 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.061000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:17:05.068000 audit[3309]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3309 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.068000 audit[3309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffc1994ca0 a2=0 a3=1 items=0 ppid=3185 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.068000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:05.076000 audit[3312]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3312 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.076000 audit[3312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=fffffb19e850 a2=0 a3=1 items=0 ppid=3185 pid=3312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.076000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:05.079000 audit[3313]: 
NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3313 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.079000 audit[3313]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc80351a0 a2=0 a3=1 items=0 ppid=3185 pid=3313 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.079000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:17:05.085000 audit[3315]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.085000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff24474d0 a2=0 a3=1 items=0 ppid=3185 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.085000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:17:05.088000 audit[3316]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3316 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.088000 audit[3316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf6d9da0 a2=0 a3=1 items=0 ppid=3185 pid=3316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.088000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:17:05.093000 audit[3318]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3318 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.093000 audit[3318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff6e39700 a2=0 a3=1 items=0 ppid=3185 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.093000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:05.101000 audit[3321]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:05.101000 audit[3321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd9482d80 a2=0 a3=1 items=0 ppid=3185 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.101000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:05.108000 audit[3323]: NETFILTER_CFG table=filter:87 family=10 entries=3 
op=nft_register_rule pid=3323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:17:05.108000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffca98f520 a2=0 a3=1 items=0 ppid=3185 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.108000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:05.109000 audit[3323]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3323 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:17:05.109000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffca98f520 a2=0 a3=1 items=0 ppid=3185 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:05.109000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:05.161875 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748419252.mount: Deactivated successfully. Jun 25 14:17:05.329976 kubelet[3044]: E0625 14:17:05.329786 3044 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 14:17:05.329976 kubelet[3044]: E0625 14:17:05.329860 3044 projected.go:200] Error preparing data for projected volume kube-api-access-brbj5 for pod tigera-operator/tigera-operator-76ff79f7fd-27hvv: failed to sync configmap cache: timed out waiting for the condition Jun 25 14:17:05.330784 kubelet[3044]: E0625 14:17:05.330002 3044 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53-kube-api-access-brbj5 podName:a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53 nodeName:}" failed. No retries permitted until 2024-06-25 14:17:05.829946779 +0000 UTC m=+16.368122389 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-brbj5" (UniqueName: "kubernetes.io/projected/a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53-kube-api-access-brbj5") pod "tigera-operator-76ff79f7fd-27hvv" (UID: "a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53") : failed to sync configmap cache: timed out waiting for the condition Jun 25 14:17:06.003353 containerd[1804]: time="2024-06-25T14:17:06.003292116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-27hvv,Uid:a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:17:06.048267 containerd[1804]: time="2024-06-25T14:17:06.048100557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:06.048267 containerd[1804]: time="2024-06-25T14:17:06.048206866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:06.048740 containerd[1804]: time="2024-06-25T14:17:06.048618623Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:06.048740 containerd[1804]: time="2024-06-25T14:17:06.048704856Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:06.085014 systemd[1]: Started cri-containerd-18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295.scope - libcontainer container 18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295. Jun 25 14:17:06.109000 audit: BPF prog-id=112 op=LOAD Jun 25 14:17:06.110000 audit: BPF prog-id=113 op=LOAD Jun 25 14:17:06.110000 audit[3345]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3334 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:06.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138643437316633346466643433643166373934613935633331376432 Jun 25 14:17:06.110000 audit: BPF prog-id=114 op=LOAD Jun 25 14:17:06.110000 audit[3345]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3334 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:06.110000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138643437316633346466643433643166373934613935633331376432 Jun 25 14:17:06.111000 audit: BPF prog-id=114 op=UNLOAD Jun 25 14:17:06.111000 audit: BPF prog-id=113 op=UNLOAD Jun 25 14:17:06.111000 audit: BPF prog-id=115 op=LOAD Jun 25 14:17:06.111000 audit[3345]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3334 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:06.111000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3138643437316633346466643433643166373934613935633331376432 Jun 25 14:17:06.156246 containerd[1804]: time="2024-06-25T14:17:06.156188777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76ff79f7fd-27hvv,Uid:a4af09ce-0b30-4c42-a2c9-1a4ebfda5e53,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295\"" Jun 25 14:17:06.166156 containerd[1804]: time="2024-06-25T14:17:06.166086872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:17:07.463335 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount240864639.mount: Deactivated successfully. 
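The long run of NETFILTER_CFG audit records above (family=2 and family=10, tables mangle, nat and filter) is kube-proxy bootstrapping its chains — KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL, KUBE-POSTROUTING and KUBE-FIREWALL — via iptables, ip6tables and the corresponding restore tools. Each proctitle value decodes to an ordinary command line with the same scheme as the sketch earlier, for example:

    # Decoding one of the NETFILTER_CFG proctitle values above.
    value = ("69707461626C6573002D770035002D5700313030303030"
             "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(bytes.fromhex(value).replace(b"\x00", b" ").decode())
    # iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle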
Jun 25 14:17:08.369840 containerd[1804]: time="2024-06-25T14:17:08.369784100Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.372154 containerd[1804]: time="2024-06-25T14:17:08.372102389Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473610" Jun 25 14:17:08.372920 containerd[1804]: time="2024-06-25T14:17:08.372878924Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.376094 containerd[1804]: time="2024-06-25T14:17:08.376033700Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.379003 containerd[1804]: time="2024-06-25T14:17:08.378952674Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.380741 containerd[1804]: time="2024-06-25T14:17:08.380647693Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.214307932s" Jun 25 14:17:08.380898 containerd[1804]: time="2024-06-25T14:17:08.380739505Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:17:08.386190 containerd[1804]: time="2024-06-25T14:17:08.385537963Z" level=info msg="CreateContainer within sandbox \"18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:17:08.407092 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4270524183.mount: Deactivated successfully. Jun 25 14:17:08.424193 containerd[1804]: time="2024-06-25T14:17:08.424132590Z" level=info msg="CreateContainer within sandbox \"18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369\"" Jun 25 14:17:08.426124 containerd[1804]: time="2024-06-25T14:17:08.425109682Z" level=info msg="StartContainer for \"596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369\"" Jun 25 14:17:08.478064 systemd[1]: Started cri-containerd-596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369.scope - libcontainer container 596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369. Jun 25 14:17:08.481302 systemd[1]: run-containerd-runc-k8s.io-596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369-runc.rmXdqW.mount: Deactivated successfully. 
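For reference, the reported pull time for quay.io/tigera/operator:v1.34.0 lines up with the gap between the PullImage request and the Pulled message above; the containerd-measured 2.214307932s is slightly shorter than the gap between the two log lines, as expected. A quick check, with the nanosecond digits truncated to what Python's datetime carries:

    from datetime import datetime

    # Log timestamps copied from the containerd records above, truncated to microseconds.
    t_request = datetime.fromisoformat("2024-06-25T14:17:06.166086")  # PullImage "quay.io/tigera/operator:v1.34.0"
    t_pulled  = datetime.fromisoformat("2024-06-25T14:17:08.380647")  # Pulled image ... in 2.214307932s
    print((t_pulled - t_request).total_seconds())                     # 2.214561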
Jun 25 14:17:08.501000 audit: BPF prog-id=116 op=LOAD Jun 25 14:17:08.504037 kernel: kauditd_printk_skb: 190 callbacks suppressed Jun 25 14:17:08.504106 kernel: audit: type=1334 audit(1719325028.501:467): prog-id=116 op=LOAD Jun 25 14:17:08.502000 audit: BPF prog-id=117 op=LOAD Jun 25 14:17:08.506568 kernel: audit: type=1334 audit(1719325028.502:468): prog-id=117 op=LOAD Jun 25 14:17:08.506640 kernel: audit: type=1300 audit(1719325028.502:468): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3334 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:08.502000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3334 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:08.502000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366237393764336265393165396433336265616630646235303965 Jun 25 14:17:08.516251 kernel: audit: type=1327 audit(1719325028.502:468): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366237393764336265393165396433336265616630646235303965 Jun 25 14:17:08.516386 kernel: audit: type=1334 audit(1719325028.504:469): prog-id=118 op=LOAD Jun 25 14:17:08.504000 audit: BPF prog-id=118 op=LOAD Jun 25 14:17:08.504000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3334 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:08.522884 kernel: audit: type=1300 audit(1719325028.504:469): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3334 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:08.523036 kernel: audit: type=1327 audit(1719325028.504:469): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366237393764336265393165396433336265616630646235303965 Jun 25 14:17:08.504000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366237393764336265393165396433336265616630646235303965 Jun 25 14:17:08.505000 audit: BPF prog-id=118 op=UNLOAD Jun 25 14:17:08.528295 kernel: audit: type=1334 audit(1719325028.505:470): prog-id=118 op=UNLOAD Jun 25 14:17:08.528377 kernel: audit: type=1334 audit(1719325028.505:471): prog-id=117 op=UNLOAD Jun 25 14:17:08.505000 audit: BPF prog-id=117 op=UNLOAD Jun 25 14:17:08.530315 kernel: audit: type=1334 
audit(1719325028.505:472): prog-id=119 op=LOAD Jun 25 14:17:08.505000 audit: BPF prog-id=119 op=LOAD Jun 25 14:17:08.505000 audit[3386]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3334 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:08.505000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366237393764336265393165396433336265616630646235303965 Jun 25 14:17:08.551444 containerd[1804]: time="2024-06-25T14:17:08.551373335Z" level=info msg="StartContainer for \"596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369\" returns successfully" Jun 25 14:17:08.900175 kubelet[3044]: I0625 14:17:08.900103 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9xsxb" podStartSLOduration=5.9000804460000005 podStartE2EDuration="5.900080446s" podCreationTimestamp="2024-06-25 14:17:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:04.896268411 +0000 UTC m=+15.434444045" watchObservedRunningTime="2024-06-25 14:17:08.900080446 +0000 UTC m=+19.438256056" Jun 25 14:17:09.794450 kubelet[3044]: I0625 14:17:09.794358 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76ff79f7fd-27hvv" podStartSLOduration=3.57322364 podStartE2EDuration="5.794335526s" podCreationTimestamp="2024-06-25 14:17:04 +0000 UTC" firstStartedPulling="2024-06-25 14:17:06.161830215 +0000 UTC m=+16.700005825" lastFinishedPulling="2024-06-25 14:17:08.382942101 +0000 UTC m=+18.921117711" observedRunningTime="2024-06-25 14:17:08.901986305 +0000 UTC m=+19.440161927" watchObservedRunningTime="2024-06-25 14:17:09.794335526 +0000 UTC m=+20.332511148" Jun 25 14:17:13.160000 audit[3423]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:13.160000 audit[3423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff02ce1d0 a2=0 a3=1 items=0 ppid=3185 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.160000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:13.161000 audit[3423]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3423 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:13.161000 audit[3423]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff02ce1d0 a2=0 a3=1 items=0 ppid=3185 pid=3423 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:13.178000 audit[3425]: NETFILTER_CFG table=filter:91 family=2 
entries=16 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:13.178000 audit[3425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffebf8f7c0 a2=0 a3=1 items=0 ppid=3185 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.178000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:13.179000 audit[3425]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3425 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:13.179000 audit[3425]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffebf8f7c0 a2=0 a3=1 items=0 ppid=3185 pid=3425 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.179000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:13.237685 kubelet[3044]: I0625 14:17:13.237606 3044 topology_manager.go:215] "Topology Admit Handler" podUID="8590e283-3efb-445e-85c7-bf99fc5ad955" podNamespace="calico-system" podName="calico-typha-d74cf677b-bpb5t" Jun 25 14:17:13.263537 systemd[1]: Created slice kubepods-besteffort-pod8590e283_3efb_445e_85c7_bf99fc5ad955.slice - libcontainer container kubepods-besteffort-pod8590e283_3efb_445e_85c7_bf99fc5ad955.slice. Jun 25 14:17:13.281798 kubelet[3044]: I0625 14:17:13.281739 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8590e283-3efb-445e-85c7-bf99fc5ad955-tigera-ca-bundle\") pod \"calico-typha-d74cf677b-bpb5t\" (UID: \"8590e283-3efb-445e-85c7-bf99fc5ad955\") " pod="calico-system/calico-typha-d74cf677b-bpb5t" Jun 25 14:17:13.281990 kubelet[3044]: I0625 14:17:13.281811 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8590e283-3efb-445e-85c7-bf99fc5ad955-typha-certs\") pod \"calico-typha-d74cf677b-bpb5t\" (UID: \"8590e283-3efb-445e-85c7-bf99fc5ad955\") " pod="calico-system/calico-typha-d74cf677b-bpb5t" Jun 25 14:17:13.281990 kubelet[3044]: I0625 14:17:13.281857 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j549n\" (UniqueName: \"kubernetes.io/projected/8590e283-3efb-445e-85c7-bf99fc5ad955-kube-api-access-j549n\") pod \"calico-typha-d74cf677b-bpb5t\" (UID: \"8590e283-3efb-445e-85c7-bf99fc5ad955\") " pod="calico-system/calico-typha-d74cf677b-bpb5t" Jun 25 14:17:13.440315 kubelet[3044]: I0625 14:17:13.440096 3044 topology_manager.go:215] "Topology Admit Handler" podUID="c2076040-bd78-4cbd-9fce-5b79ad4e78d4" podNamespace="calico-system" podName="calico-node-829q5" Jun 25 14:17:13.452728 systemd[1]: Created slice kubepods-besteffort-podc2076040_bd78_4cbd_9fce_5b79ad4e78d4.slice - libcontainer container kubepods-besteffort-podc2076040_bd78_4cbd_9fce_5b79ad4e78d4.slice. 
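The SYSCALL/PROCTITLE audit records above carry the invoking command line hex-encoded, with NUL bytes separating the arguments. A small decoder (sketch, standard library only) recovers it; the value repeated in the NETFILTER_CFG events decodes to `iptables-restore -w 5 -W 100000 --noflush --counters`, consistent with kube-proxy periodically restoring its rule set.

```go
// Quick decoder for the audit PROCTITLE field seen above: the value is the
// process argv, hex-encoded, with NUL bytes between arguments.
package main

import (
	"encoding/hex"
	"fmt"
	"log"
	"strings"
)

func decodeProctitle(h string) ([]string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return nil, err
	}
	return strings.Split(strings.TrimRight(string(raw), "\x00"), "\x00"), nil
}

func main() {
	// Hex string copied from the NETFILTER_CFG records above.
	const proctitle = "69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273"
	argv, err := decodeProctitle(proctitle)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(strings.Join(argv, " "))
	// Output: iptables-restore -w 5 -W 100000 --noflush --counters
}
```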
Jun 25 14:17:13.483877 kubelet[3044]: I0625 14:17:13.483830 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-cni-log-dir\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.484230 kubelet[3044]: I0625 14:17:13.484198 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-flexvol-driver-host\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.484397 kubelet[3044]: I0625 14:17:13.484370 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-policysync\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.484558 kubelet[3044]: I0625 14:17:13.484532 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-node-certs\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.484829 kubelet[3044]: I0625 14:17:13.484788 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csbn7\" (UniqueName: \"kubernetes.io/projected/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-kube-api-access-csbn7\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.485090 kubelet[3044]: I0625 14:17:13.485061 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-var-lib-calico\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.485330 kubelet[3044]: I0625 14:17:13.485292 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-tigera-ca-bundle\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.485552 kubelet[3044]: I0625 14:17:13.485514 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-cni-bin-dir\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.485784 kubelet[3044]: I0625 14:17:13.485756 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-xtables-lock\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.486035 kubelet[3044]: I0625 14:17:13.486006 3044 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-cni-net-dir\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.486653 kubelet[3044]: I0625 14:17:13.486597 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-lib-modules\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.487252 kubelet[3044]: I0625 14:17:13.487213 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c2076040-bd78-4cbd-9fce-5b79ad4e78d4-var-run-calico\") pod \"calico-node-829q5\" (UID: \"c2076040-bd78-4cbd-9fce-5b79ad4e78d4\") " pod="calico-system/calico-node-829q5" Jun 25 14:17:13.578755 kubelet[3044]: I0625 14:17:13.578633 3044 topology_manager.go:215] "Topology Admit Handler" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" podNamespace="calico-system" podName="csi-node-driver-6sfl8" Jun 25 14:17:13.581309 containerd[1804]: time="2024-06-25T14:17:13.581226954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d74cf677b-bpb5t,Uid:8590e283-3efb-445e-85c7-bf99fc5ad955,Namespace:calico-system,Attempt:0,}" Jun 25 14:17:13.582409 kubelet[3044]: E0625 14:17:13.579629 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:13.588775 kubelet[3044]: I0625 14:17:13.588692 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f0eb91a0-b29d-4d99-bc83-5df8975b23bb-varrun\") pod \"csi-node-driver-6sfl8\" (UID: \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\") " pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:13.588947 kubelet[3044]: I0625 14:17:13.588909 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f0eb91a0-b29d-4d99-bc83-5df8975b23bb-socket-dir\") pod \"csi-node-driver-6sfl8\" (UID: \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\") " pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:13.589110 kubelet[3044]: I0625 14:17:13.589024 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f0eb91a0-b29d-4d99-bc83-5df8975b23bb-registration-dir\") pod \"csi-node-driver-6sfl8\" (UID: \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\") " pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:13.589205 kubelet[3044]: I0625 14:17:13.589143 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lf8mn\" (UniqueName: \"kubernetes.io/projected/f0eb91a0-b29d-4d99-bc83-5df8975b23bb-kube-api-access-lf8mn\") pod \"csi-node-driver-6sfl8\" (UID: \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\") " pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:13.589478 kubelet[3044]: I0625 
14:17:13.589432 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f0eb91a0-b29d-4d99-bc83-5df8975b23bb-kubelet-dir\") pod \"csi-node-driver-6sfl8\" (UID: \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\") " pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:13.609535 kubelet[3044]: E0625 14:17:13.609175 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.609535 kubelet[3044]: W0625 14:17:13.609213 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.609535 kubelet[3044]: E0625 14:17:13.609264 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.638324 kubelet[3044]: E0625 14:17:13.638274 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.638324 kubelet[3044]: W0625 14:17:13.638312 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.638549 kubelet[3044]: E0625 14:17:13.638346 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.675996 containerd[1804]: time="2024-06-25T14:17:13.658005866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:13.675996 containerd[1804]: time="2024-06-25T14:17:13.658111119Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:13.675996 containerd[1804]: time="2024-06-25T14:17:13.658179543Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:13.675996 containerd[1804]: time="2024-06-25T14:17:13.658214763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:13.676351 kubelet[3044]: E0625 14:17:13.671449 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.676351 kubelet[3044]: W0625 14:17:13.671478 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.676351 kubelet[3044]: E0625 14:17:13.671508 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.692899 kubelet[3044]: E0625 14:17:13.692746 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.692899 kubelet[3044]: W0625 14:17:13.692787 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.692899 kubelet[3044]: E0625 14:17:13.692840 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.694883 kubelet[3044]: E0625 14:17:13.694837 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.694883 kubelet[3044]: W0625 14:17:13.694873 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.695113 kubelet[3044]: E0625 14:17:13.694913 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.695431 kubelet[3044]: E0625 14:17:13.695390 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.695431 kubelet[3044]: W0625 14:17:13.695421 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.695616 kubelet[3044]: E0625 14:17:13.695459 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.696035 kubelet[3044]: E0625 14:17:13.695982 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.696035 kubelet[3044]: W0625 14:17:13.696014 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.696210 kubelet[3044]: E0625 14:17:13.696182 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.696588 kubelet[3044]: E0625 14:17:13.696549 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.696588 kubelet[3044]: W0625 14:17:13.696580 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.696805 kubelet[3044]: E0625 14:17:13.696764 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.697088 kubelet[3044]: E0625 14:17:13.697052 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.697088 kubelet[3044]: W0625 14:17:13.697081 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.697257 kubelet[3044]: E0625 14:17:13.697242 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.697551 kubelet[3044]: E0625 14:17:13.697515 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.697551 kubelet[3044]: W0625 14:17:13.697543 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.697753 kubelet[3044]: E0625 14:17:13.697726 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.698037 kubelet[3044]: E0625 14:17:13.698002 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.698037 kubelet[3044]: W0625 14:17:13.698029 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.698226 kubelet[3044]: E0625 14:17:13.698208 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.698630 kubelet[3044]: E0625 14:17:13.698592 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.698630 kubelet[3044]: W0625 14:17:13.698621 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.700964 kubelet[3044]: E0625 14:17:13.700879 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.701530 kubelet[3044]: E0625 14:17:13.701481 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.701530 kubelet[3044]: W0625 14:17:13.701518 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.701854 kubelet[3044]: E0625 14:17:13.701812 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.702197 kubelet[3044]: E0625 14:17:13.702156 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.702197 kubelet[3044]: W0625 14:17:13.702188 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.702410 kubelet[3044]: E0625 14:17:13.702369 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.702722 kubelet[3044]: E0625 14:17:13.702655 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.702808 kubelet[3044]: W0625 14:17:13.702734 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.702919 kubelet[3044]: E0625 14:17:13.702882 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.703176 kubelet[3044]: E0625 14:17:13.703140 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.703176 kubelet[3044]: W0625 14:17:13.703167 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.703363 kubelet[3044]: E0625 14:17:13.703316 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.705853 kubelet[3044]: E0625 14:17:13.705797 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.705853 kubelet[3044]: W0625 14:17:13.705839 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.706085 kubelet[3044]: E0625 14:17:13.706031 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.706375 kubelet[3044]: E0625 14:17:13.706333 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.706375 kubelet[3044]: W0625 14:17:13.706364 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.706558 kubelet[3044]: E0625 14:17:13.706522 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.706855 kubelet[3044]: E0625 14:17:13.706813 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.706855 kubelet[3044]: W0625 14:17:13.706840 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.707041 kubelet[3044]: E0625 14:17:13.707004 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.707459 kubelet[3044]: E0625 14:17:13.707419 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.707459 kubelet[3044]: W0625 14:17:13.707448 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.707636 kubelet[3044]: E0625 14:17:13.707611 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.707917 kubelet[3044]: E0625 14:17:13.707879 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.707917 kubelet[3044]: W0625 14:17:13.707906 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.708072 kubelet[3044]: E0625 14:17:13.708058 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.710826 kubelet[3044]: E0625 14:17:13.710772 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.710826 kubelet[3044]: W0625 14:17:13.710812 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.711087 kubelet[3044]: E0625 14:17:13.711047 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.711405 kubelet[3044]: E0625 14:17:13.711363 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.711405 kubelet[3044]: W0625 14:17:13.711394 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.711636 kubelet[3044]: E0625 14:17:13.711596 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.711951 kubelet[3044]: E0625 14:17:13.711910 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.711951 kubelet[3044]: W0625 14:17:13.711943 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.712179 kubelet[3044]: E0625 14:17:13.712137 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.712412 kubelet[3044]: E0625 14:17:13.712376 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.712412 kubelet[3044]: W0625 14:17:13.712405 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.712615 kubelet[3044]: E0625 14:17:13.712559 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.712891 kubelet[3044]: E0625 14:17:13.712845 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.712891 kubelet[3044]: W0625 14:17:13.712875 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.714786 kubelet[3044]: E0625 14:17:13.713068 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.716445 kubelet[3044]: E0625 14:17:13.716146 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.716445 kubelet[3044]: W0625 14:17:13.716173 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.716445 kubelet[3044]: E0625 14:17:13.716301 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.717470 kubelet[3044]: E0625 14:17:13.716994 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.717470 kubelet[3044]: W0625 14:17:13.717029 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.717470 kubelet[3044]: E0625 14:17:13.717060 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:13.732218 systemd[1]: Started cri-containerd-3fa5653125496a6f92947a077ced5b391084274664302756a546593060b8c4a6.scope - libcontainer container 3fa5653125496a6f92947a077ced5b391084274664302756a546593060b8c4a6. Jun 25 14:17:13.761998 containerd[1804]: time="2024-06-25T14:17:13.761926034Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-829q5,Uid:c2076040-bd78-4cbd-9fce-5b79ad4e78d4,Namespace:calico-system,Attempt:0,}" Jun 25 14:17:13.766706 kubelet[3044]: E0625 14:17:13.765077 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:13.766706 kubelet[3044]: W0625 14:17:13.765117 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:13.766706 kubelet[3044]: E0625 14:17:13.765173 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:13.839614 containerd[1804]: time="2024-06-25T14:17:13.839457457Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:13.839792 containerd[1804]: time="2024-06-25T14:17:13.839734406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:13.839929 containerd[1804]: time="2024-06-25T14:17:13.839844842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:13.840686 containerd[1804]: time="2024-06-25T14:17:13.840586624Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:13.871016 systemd[1]: Started cri-containerd-db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb.scope - libcontainer container db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb. 
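The repeated driver-call.go errors above come from the kubelet probing FlexVolume drivers under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/: each driver is executed with an `init` argument and is expected to print a JSON status object. Calico's nodeagent~uds/uds driver is not installed yet at this point in the boot, so the exec fails, stdout is empty, and unmarshalling the empty output yields "unexpected end of JSON input". A rough sketch of that call pattern follows; the type and field names are illustrative, not kubelet's own.

```go
// Approximation of the kubelet's FlexVolume probe that produces the repeated
// "Failed to unmarshal output for command: init" errors above: exec the
// driver with "init" and decode a JSON status from stdout. With the driver
// binary missing, the output is empty and json.Unmarshal reports
// "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus mirrors the minimal FlexVolume response shape
// ({"status": "Success", ...}); the struct here is illustrative.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func callDriver(driverPath string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(driverPath, args...).CombinedOutput()

	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Empty output -> unmarshal error, matching the log; execErr carries
		// the underlying exec failure for the missing binary.
		return nil, fmt.Errorf("unmarshal %q failed: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	const driver = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"
	if _, err := callDriver(driver, "init"); err != nil {
		fmt.Println("flexvolume probe failed:", err)
	}
}
```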
Jun 25 14:17:13.876000 audit: BPF prog-id=120 op=LOAD Jun 25 14:17:13.880750 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:17:13.880840 kernel: audit: type=1334 audit(1719325033.876:477): prog-id=120 op=LOAD Jun 25 14:17:13.877000 audit: BPF prog-id=121 op=LOAD Jun 25 14:17:13.883133 kernel: audit: type=1334 audit(1719325033.877:478): prog-id=121 op=LOAD Jun 25 14:17:13.883199 kernel: audit: type=1300 audit(1719325033.877:478): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3439 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.877000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3439 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613536353331323534393661366639323934376130373763656435 Jun 25 14:17:13.895748 kernel: audit: type=1327 audit(1719325033.877:478): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613536353331323534393661366639323934376130373763656435 Jun 25 14:17:13.877000 audit: BPF prog-id=122 op=LOAD Jun 25 14:17:13.902160 kernel: audit: type=1334 audit(1719325033.877:479): prog-id=122 op=LOAD Jun 25 14:17:13.877000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3439 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.909231 kernel: audit: type=1300 audit(1719325033.877:479): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3439 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613536353331323534393661366639323934376130373763656435 Jun 25 14:17:13.917519 kernel: audit: type=1327 audit(1719325033.877:479): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613536353331323534393661366639323934376130373763656435 Jun 25 14:17:13.877000 audit: BPF prog-id=122 op=UNLOAD Jun 25 14:17:13.921108 kernel: audit: type=1334 audit(1719325033.877:480): prog-id=122 op=UNLOAD Jun 25 14:17:13.877000 audit: BPF prog-id=121 op=UNLOAD Jun 25 14:17:13.922521 kernel: audit: type=1334 audit(1719325033.877:481): prog-id=121 op=UNLOAD Jun 25 14:17:13.877000 audit: BPF prog-id=123 op=LOAD Jun 25 
14:17:13.924201 kernel: audit: type=1334 audit(1719325033.877:482): prog-id=123 op=LOAD Jun 25 14:17:13.877000 audit[3452]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=3439 pid=3452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3366613536353331323534393661366639323934376130373763656435 Jun 25 14:17:13.955000 audit: BPF prog-id=124 op=LOAD Jun 25 14:17:13.958000 audit: BPF prog-id=125 op=LOAD Jun 25 14:17:13.958000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3502 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383935323232396336373330663031613938336630623363323361 Jun 25 14:17:13.958000 audit: BPF prog-id=126 op=LOAD Jun 25 14:17:13.958000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3502 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383935323232396336373330663031613938336630623363323361 Jun 25 14:17:13.958000 audit: BPF prog-id=126 op=UNLOAD Jun 25 14:17:13.958000 audit: BPF prog-id=125 op=UNLOAD Jun 25 14:17:13.958000 audit: BPF prog-id=127 op=LOAD Jun 25 14:17:13.958000 audit[3515]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3502 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:13.958000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462383935323232396336373330663031613938336630623363323361 Jun 25 14:17:14.005411 containerd[1804]: time="2024-06-25T14:17:14.005351993Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-d74cf677b-bpb5t,Uid:8590e283-3efb-445e-85c7-bf99fc5ad955,Namespace:calico-system,Attempt:0,} returns sandbox id \"3fa5653125496a6f92947a077ced5b391084274664302756a546593060b8c4a6\"" Jun 25 14:17:14.011895 containerd[1804]: time="2024-06-25T14:17:14.011135867Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:17:14.037111 containerd[1804]: time="2024-06-25T14:17:14.036979472Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-829q5,Uid:c2076040-bd78-4cbd-9fce-5b79ad4e78d4,Namespace:calico-system,Attempt:0,} returns sandbox id \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\"" Jun 25 14:17:14.199000 audit[3544]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:14.199000 audit[3544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcb6ee7d0 a2=0 a3=1 items=0 ppid=3185 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:14.199000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:14.200000 audit[3544]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3544 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:14.200000 audit[3544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcb6ee7d0 a2=0 a3=1 items=0 ppid=3185 pid=3544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:14.200000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:14.778242 kubelet[3044]: E0625 14:17:14.778108 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:16.778310 kubelet[3044]: E0625 14:17:16.777812 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:16.940075 containerd[1804]: time="2024-06-25T14:17:16.940014774Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.941585 containerd[1804]: time="2024-06-25T14:17:16.941520575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:17:16.943413 containerd[1804]: time="2024-06-25T14:17:16.943344784Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.946777 containerd[1804]: time="2024-06-25T14:17:16.946712786Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.949795 containerd[1804]: time="2024-06-25T14:17:16.949738343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.951921 containerd[1804]: 
time="2024-06-25T14:17:16.951864546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.940652119s" Jun 25 14:17:16.952750 containerd[1804]: time="2024-06-25T14:17:16.952710452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:17:16.968696 containerd[1804]: time="2024-06-25T14:17:16.968599640Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:17:16.991397 containerd[1804]: time="2024-06-25T14:17:16.991326304Z" level=info msg="CreateContainer within sandbox \"3fa5653125496a6f92947a077ced5b391084274664302756a546593060b8c4a6\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:17:17.022134 containerd[1804]: time="2024-06-25T14:17:17.022038632Z" level=info msg="CreateContainer within sandbox \"3fa5653125496a6f92947a077ced5b391084274664302756a546593060b8c4a6\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9885cf74ced640de31dff628bc426bcd63973fc62d43e39bcde5d8cbc79a15e0\"" Jun 25 14:17:17.023784 containerd[1804]: time="2024-06-25T14:17:17.023714736Z" level=info msg="StartContainer for \"9885cf74ced640de31dff628bc426bcd63973fc62d43e39bcde5d8cbc79a15e0\"" Jun 25 14:17:17.095029 systemd[1]: Started cri-containerd-9885cf74ced640de31dff628bc426bcd63973fc62d43e39bcde5d8cbc79a15e0.scope - libcontainer container 9885cf74ced640de31dff628bc426bcd63973fc62d43e39bcde5d8cbc79a15e0. 
Jun 25 14:17:17.123000 audit: BPF prog-id=128 op=LOAD Jun 25 14:17:17.124000 audit: BPF prog-id=129 op=LOAD Jun 25 14:17:17.124000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=3439 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:17.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938383563663734636564363430646533316466663632386263343236 Jun 25 14:17:17.125000 audit: BPF prog-id=130 op=LOAD Jun 25 14:17:17.125000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=3439 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:17.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938383563663734636564363430646533316466663632386263343236 Jun 25 14:17:17.126000 audit: BPF prog-id=130 op=UNLOAD Jun 25 14:17:17.126000 audit: BPF prog-id=129 op=UNLOAD Jun 25 14:17:17.126000 audit: BPF prog-id=131 op=LOAD Jun 25 14:17:17.126000 audit[3558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=3439 pid=3558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:17.126000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938383563663734636564363430646533316466663632386263343236 Jun 25 14:17:17.176073 containerd[1804]: time="2024-06-25T14:17:17.176008109Z" level=info msg="StartContainer for \"9885cf74ced640de31dff628bc426bcd63973fc62d43e39bcde5d8cbc79a15e0\" returns successfully" Jun 25 14:17:17.928027 kubelet[3044]: E0625 14:17:17.927502 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.928027 kubelet[3044]: W0625 14:17:17.927542 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.928027 kubelet[3044]: E0625 14:17:17.927597 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.929341 kubelet[3044]: E0625 14:17:17.928915 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.929341 kubelet[3044]: W0625 14:17:17.928947 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.929341 kubelet[3044]: E0625 14:17:17.928978 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.930121 kubelet[3044]: E0625 14:17:17.929717 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.930121 kubelet[3044]: W0625 14:17:17.929743 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.930121 kubelet[3044]: E0625 14:17:17.929773 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.930892 kubelet[3044]: E0625 14:17:17.930424 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.930892 kubelet[3044]: W0625 14:17:17.930462 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.930892 kubelet[3044]: E0625 14:17:17.930487 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.931513 kubelet[3044]: E0625 14:17:17.931237 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.931513 kubelet[3044]: W0625 14:17:17.931262 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.931513 kubelet[3044]: E0625 14:17:17.931289 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.933470 kubelet[3044]: E0625 14:17:17.933418 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.934176 kubelet[3044]: W0625 14:17:17.933894 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.934176 kubelet[3044]: E0625 14:17:17.933952 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.935023 kubelet[3044]: E0625 14:17:17.934844 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.935938 kubelet[3044]: W0625 14:17:17.935136 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.935938 kubelet[3044]: E0625 14:17:17.935174 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.936295 kubelet[3044]: E0625 14:17:17.936264 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.936431 kubelet[3044]: W0625 14:17:17.936404 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.936580 kubelet[3044]: E0625 14:17:17.936554 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.937127 kubelet[3044]: E0625 14:17:17.937100 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.937298 kubelet[3044]: W0625 14:17:17.937272 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.937457 kubelet[3044]: E0625 14:17:17.937417 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.938222 kubelet[3044]: E0625 14:17:17.938184 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.938408 kubelet[3044]: W0625 14:17:17.938380 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.939134 kubelet[3044]: E0625 14:17:17.938502 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.939286 kubelet[3044]: E0625 14:17:17.938963 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.939413 kubelet[3044]: W0625 14:17:17.939383 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.939543 kubelet[3044]: E0625 14:17:17.939516 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.940701 kubelet[3044]: E0625 14:17:17.940635 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.940893 kubelet[3044]: W0625 14:17:17.940864 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.941056 kubelet[3044]: E0625 14:17:17.941030 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.941597 kubelet[3044]: E0625 14:17:17.941555 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.941827 kubelet[3044]: W0625 14:17:17.941797 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.941981 kubelet[3044]: E0625 14:17:17.941955 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.942541 kubelet[3044]: E0625 14:17:17.942513 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.942765 kubelet[3044]: W0625 14:17:17.942736 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.942932 kubelet[3044]: E0625 14:17:17.942905 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.943382 kubelet[3044]: E0625 14:17:17.943359 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.943516 kubelet[3044]: W0625 14:17:17.943490 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.943692 kubelet[3044]: E0625 14:17:17.943649 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.944925 kubelet[3044]: E0625 14:17:17.944885 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.945131 kubelet[3044]: W0625 14:17:17.945102 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.945315 kubelet[3044]: E0625 14:17:17.945289 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.946045 kubelet[3044]: E0625 14:17:17.946013 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.946294 kubelet[3044]: W0625 14:17:17.946264 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.946432 kubelet[3044]: E0625 14:17:17.946406 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.947012 kubelet[3044]: E0625 14:17:17.946983 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.947346 kubelet[3044]: W0625 14:17:17.947314 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.947490 kubelet[3044]: E0625 14:17:17.947463 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.948386 kubelet[3044]: E0625 14:17:17.948311 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.948593 kubelet[3044]: W0625 14:17:17.948565 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.948783 kubelet[3044]: E0625 14:17:17.948755 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.949275 kubelet[3044]: E0625 14:17:17.949247 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.949462 kubelet[3044]: W0625 14:17:17.949435 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.951087 kubelet[3044]: E0625 14:17:17.950845 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.951561 kubelet[3044]: E0625 14:17:17.951529 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.951946 kubelet[3044]: W0625 14:17:17.951792 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.953077 kubelet[3044]: E0625 14:17:17.952648 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.953419 kubelet[3044]: E0625 14:17:17.953388 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.953594 kubelet[3044]: W0625 14:17:17.953566 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.954192 kubelet[3044]: E0625 14:17:17.954160 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.954368 kubelet[3044]: W0625 14:17:17.954341 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.954550 kubelet[3044]: E0625 14:17:17.954501 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.954680 kubelet[3044]: E0625 14:17:17.954565 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.955248 kubelet[3044]: E0625 14:17:17.955218 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.955435 kubelet[3044]: W0625 14:17:17.955411 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.955562 kubelet[3044]: E0625 14:17:17.955536 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.956320 kubelet[3044]: E0625 14:17:17.956274 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.956590 kubelet[3044]: W0625 14:17:17.956549 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.956830 kubelet[3044]: E0625 14:17:17.956799 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.957391 kubelet[3044]: E0625 14:17:17.957361 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.958984 kubelet[3044]: W0625 14:17:17.957496 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.958984 kubelet[3044]: E0625 14:17:17.957530 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.962935 kubelet[3044]: E0625 14:17:17.962886 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.962935 kubelet[3044]: W0625 14:17:17.962925 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.963139 kubelet[3044]: E0625 14:17:17.962969 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.964977 kubelet[3044]: E0625 14:17:17.964001 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.964977 kubelet[3044]: W0625 14:17:17.964062 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.965280 kubelet[3044]: E0625 14:17:17.965246 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.965954 kubelet[3044]: E0625 14:17:17.965907 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.966073 kubelet[3044]: W0625 14:17:17.965968 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.966157 kubelet[3044]: E0625 14:17:17.966066 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.969508 kubelet[3044]: E0625 14:17:17.969443 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.969508 kubelet[3044]: W0625 14:17:17.969482 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.969825 kubelet[3044]: E0625 14:17:17.969619 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.970481 kubelet[3044]: E0625 14:17:17.970436 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.970609 kubelet[3044]: W0625 14:17:17.970496 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.970609 kubelet[3044]: E0625 14:17:17.970594 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:17:17.972249 kubelet[3044]: E0625 14:17:17.972200 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.972249 kubelet[3044]: W0625 14:17:17.972235 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.972490 kubelet[3044]: E0625 14:17:17.972275 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.972808 kubelet[3044]: E0625 14:17:17.972774 3044 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:17:17.972808 kubelet[3044]: W0625 14:17:17.972802 3044 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:17:17.972968 kubelet[3044]: E0625 14:17:17.972831 3044 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:17:17.980069 kubelet[3044]: I0625 14:17:17.979985 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-d74cf677b-bpb5t" podStartSLOduration=2.035777655 podStartE2EDuration="4.979960784s" podCreationTimestamp="2024-06-25 14:17:13 +0000 UTC" firstStartedPulling="2024-06-25 14:17:14.0101618 +0000 UTC m=+24.548337410" lastFinishedPulling="2024-06-25 14:17:16.954344929 +0000 UTC m=+27.492520539" observedRunningTime="2024-06-25 14:17:17.949204898 +0000 UTC m=+28.487380532" watchObservedRunningTime="2024-06-25 14:17:17.979960784 +0000 UTC m=+28.518136406" Jun 25 14:17:18.019000 audit[3621]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3621 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:18.019000 audit[3621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffdc2e7dd0 a2=0 a3=1 items=0 ppid=3185 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:18.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:18.021000 audit[3621]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3621 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:18.021000 audit[3621]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffdc2e7dd0 a2=0 a3=1 items=0 ppid=3185 pid=3621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:18.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:18.372943 containerd[1804]: time="2024-06-25T14:17:18.372888178Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:18.374997 containerd[1804]: time="2024-06-25T14:17:18.374946292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:17:18.375852 containerd[1804]: time="2024-06-25T14:17:18.375810487Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:18.379145 containerd[1804]: time="2024-06-25T14:17:18.379059484Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:18.382892 containerd[1804]: time="2024-06-25T14:17:18.382828599Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:18.387883 containerd[1804]: time="2024-06-25T14:17:18.387804989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.418186126s" Jun 25 14:17:18.387883 containerd[1804]: time="2024-06-25T14:17:18.387877554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:17:18.397333 containerd[1804]: time="2024-06-25T14:17:18.397274193Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:17:18.426619 containerd[1804]: time="2024-06-25T14:17:18.426560181Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18\"" Jun 25 14:17:18.427983 containerd[1804]: time="2024-06-25T14:17:18.427931233Z" level=info msg="StartContainer for \"6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18\"" Jun 25 14:17:18.528066 systemd[1]: Started cri-containerd-6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18.scope - libcontainer container 6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18. 
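The driver-call errors repeated above come from kubelet's FlexVolume probe: every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ is treated as a driver, the binary inside is invoked with the argument init, and its stdout is decoded as JSON. The nodeagent~uds/uds binary is not installed yet (the pod2daemon-flexvol image pulled above is what provides it), so the call produces no output and decoding an empty string fails with "unexpected end of JSON input". A minimal Go sketch of that failure mode, written for illustration rather than taken from kubelet's source:

// Illustrative only: how an empty driver response turns into the
// "unexpected end of JSON input" errors logged by kubelet above.
package main

import (
    "encoding/json"
    "fmt"
)

// driverStatus mirrors the JSON a FlexVolume driver is expected to print
// in response to "init" (field names follow the documented convention).
type driverStatus struct {
    Status       string          `json:"status"`
    Message      string          `json:"message,omitempty"`
    Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func parseInitOutput(out []byte) (*driverStatus, error) {
    var st driverStatus
    if err := json.Unmarshal(out, &st); err != nil {
        return nil, fmt.Errorf("failed to unmarshal output for command: init, output: %q, error: %w", out, err)
    }
    return &st, nil
}

func main() {
    // The uds binary is missing, so the captured output is empty.
    if _, err := parseInitOutput([]byte("")); err != nil {
        fmt.Println(err)
    }
}

Once the flexvol-driver container started below has installed the binary, the same probe should return a populated status object and the warnings stop.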
Jun 25 14:17:18.576000 audit: BPF prog-id=132 op=LOAD Jun 25 14:17:18.576000 audit[3635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=3502 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:18.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396364313563376330336361666133663433333363633839386563 Jun 25 14:17:18.577000 audit: BPF prog-id=133 op=LOAD Jun 25 14:17:18.577000 audit[3635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=3502 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:18.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396364313563376330336361666133663433333363633839386563 Jun 25 14:17:18.577000 audit: BPF prog-id=133 op=UNLOAD Jun 25 14:17:18.577000 audit: BPF prog-id=132 op=UNLOAD Jun 25 14:17:18.577000 audit: BPF prog-id=134 op=LOAD Jun 25 14:17:18.577000 audit[3635]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=3502 pid=3635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:18.577000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661396364313563376330336361666133663433333363633839386563 Jun 25 14:17:18.623355 containerd[1804]: time="2024-06-25T14:17:18.623208924Z" level=info msg="StartContainer for \"6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18\" returns successfully" Jun 25 14:17:18.737414 systemd[1]: cri-containerd-6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18.scope: Deactivated successfully. Jun 25 14:17:18.740000 audit: BPF prog-id=134 op=UNLOAD Jun 25 14:17:18.777579 kubelet[3044]: E0625 14:17:18.777500 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:18.963091 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18-rootfs.mount: Deactivated successfully. 
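The audit SYSCALL/PROCTITLE records above capture runc being invoked for the new container; the proctitle field is the full command line, hex-encoded with NUL bytes between arguments. A small standalone decoder (written here for illustration, not a tool present on this host):

// Decode an audit PROCTITLE value back into argv. The hex string below is
// a shortened prefix of the values logged above.
package main

import (
    "encoding/hex"
    "fmt"
    "strings"
)

func decodeProctitle(h string) ([]string, error) {
    raw, err := hex.DecodeString(h)
    if err != nil {
        return nil, err
    }
    // Arguments are separated by NUL bytes in the audit record.
    return strings.Split(string(raw), "\x00"), nil
}

func main() {
    argv, err := decodeProctitle("72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
    if err != nil {
        panic(err)
    }
    fmt.Println(argv) // [runc --root /run/containerd/runc/k8s.io]
}

Applied to the full values above, this yields runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/6a9cd15c7c03cafa3f4333cc898ec, with the tail cut off by the audit subsystem's proctitle length limit.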
Jun 25 14:17:19.128582 containerd[1804]: time="2024-06-25T14:17:19.128497974Z" level=info msg="shim disconnected" id=6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18 namespace=k8s.io Jun 25 14:17:19.128582 containerd[1804]: time="2024-06-25T14:17:19.128571054Z" level=warning msg="cleaning up after shim disconnected" id=6a9cd15c7c03cafa3f4333cc898ec96f1ed5f5c12d9760fad362999f8de18d18 namespace=k8s.io Jun 25 14:17:19.128920 containerd[1804]: time="2024-06-25T14:17:19.128597346Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:17:19.934251 containerd[1804]: time="2024-06-25T14:17:19.933633709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:17:20.777202 kubelet[3044]: E0625 14:17:20.777119 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:22.778494 kubelet[3044]: E0625 14:17:22.777950 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:23.976110 containerd[1804]: time="2024-06-25T14:17:23.976031665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:23.977780 containerd[1804]: time="2024-06-25T14:17:23.977709882Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:17:23.979706 containerd[1804]: time="2024-06-25T14:17:23.979620563Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:23.983445 containerd[1804]: time="2024-06-25T14:17:23.983396985Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:23.986511 containerd[1804]: time="2024-06-25T14:17:23.986460305Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:23.988464 containerd[1804]: time="2024-06-25T14:17:23.988396978Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.054646281s" Jun 25 14:17:23.988763 containerd[1804]: time="2024-06-25T14:17:23.988630871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:17:23.993803 containerd[1804]: time="2024-06-25T14:17:23.993634500Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for container 
&ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:17:24.021485 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2903256428.mount: Deactivated successfully. Jun 25 14:17:24.031583 containerd[1804]: time="2024-06-25T14:17:24.031501582Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1\"" Jun 25 14:17:24.034594 containerd[1804]: time="2024-06-25T14:17:24.032959766Z" level=info msg="StartContainer for \"3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1\"" Jun 25 14:17:24.092999 systemd[1]: Started cri-containerd-3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1.scope - libcontainer container 3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1. Jun 25 14:17:24.127358 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 14:17:24.127541 kernel: audit: type=1334 audit(1719325044.123:505): prog-id=135 op=LOAD Jun 25 14:17:24.123000 audit: BPF prog-id=135 op=LOAD Jun 25 14:17:24.123000 audit[3707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.132700 kernel: audit: type=1300 audit(1719325044.123:505): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.132831 kernel: audit: type=1327 audit(1719325044.123:505): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361656166336436373533373763323436383665663166303137323437 Jun 25 14:17:24.123000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361656166336436373533373763323436383665663166303137323437 Jun 25 14:17:24.123000 audit: BPF prog-id=136 op=LOAD Jun 25 14:17:24.139800 kernel: audit: type=1334 audit(1719325044.123:506): prog-id=136 op=LOAD Jun 25 14:17:24.139923 kernel: audit: type=1300 audit(1719325044.123:506): arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.123000 audit[3707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.123000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361656166336436373533373763323436383665663166303137323437 Jun 25 14:17:24.148698 kernel: audit: type=1327 audit(1719325044.123:506): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361656166336436373533373763323436383665663166303137323437 Jun 25 14:17:24.150147 kernel: audit: type=1334 audit(1719325044.125:507): prog-id=136 op=UNLOAD Jun 25 14:17:24.125000 audit: BPF prog-id=136 op=UNLOAD Jun 25 14:17:24.125000 audit: BPF prog-id=135 op=UNLOAD Jun 25 14:17:24.125000 audit: BPF prog-id=137 op=LOAD Jun 25 14:17:24.158311 kernel: audit: type=1334 audit(1719325044.125:508): prog-id=135 op=UNLOAD Jun 25 14:17:24.158429 kernel: audit: type=1334 audit(1719325044.125:509): prog-id=137 op=LOAD Jun 25 14:17:24.158485 kernel: audit: type=1300 audit(1719325044.125:509): arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.125000 audit[3707]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=3502 pid=3707 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:24.125000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3361656166336436373533373763323436383665663166303137323437 Jun 25 14:17:24.172485 containerd[1804]: time="2024-06-25T14:17:24.172377399Z" level=info msg="StartContainer for \"3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1\" returns successfully" Jun 25 14:17:24.777765 kubelet[3044]: E0625 14:17:24.777651 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:25.274069 containerd[1804]: time="2024-06-25T14:17:25.273991184Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:17:25.279012 systemd[1]: cri-containerd-3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1.scope: Deactivated successfully. Jun 25 14:17:25.281000 audit: BPF prog-id=137 op=UNLOAD Jun 25 14:17:25.318295 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1-rootfs.mount: Deactivated successfully. 
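The reload error just above is expected at this stage: the fs event was a write to /etc/cni/net.d/calico-kubeconfig, which is a credentials file rather than a network configuration, and the install-cni container has not yet dropped a *.conf or *.conflist file into the directory. A rough sketch of the scan the runtime performs, assuming rather than quoting containerd's behaviour:

// Illustrative scan of the CNI configuration directory: only files with a
// recognised extension count, so calico-kubeconfig alone still means
// "no network config found" as far as the runtime is concerned.
package main

import (
    "fmt"
    "os"
    "path/filepath"
)

func findNetworkConfigs(dir string) ([]string, error) {
    entries, err := os.ReadDir(dir)
    if err != nil {
        return nil, err
    }
    var configs []string
    for _, e := range entries {
        switch filepath.Ext(e.Name()) {
        case ".conf", ".conflist", ".json":
            configs = append(configs, filepath.Join(dir, e.Name()))
        }
    }
    return configs, nil
}

func main() {
    configs, err := findNetworkConfigs("/etc/cni/net.d")
    if err != nil || len(configs) == 0 {
        fmt.Println("cni config load failed: no network config found in /etc/cni/net.d")
        return
    }
    fmt.Println("found CNI configs:", configs)
}

Once install-cni writes its network configuration (typically a 10-calico.conflist), the next fs change event lets the runtime initialize the plugin and the NetworkReady=false condition clears.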
Jun 25 14:17:25.327468 kubelet[3044]: I0625 14:17:25.327424 3044 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jun 25 14:17:25.358813 kubelet[3044]: I0625 14:17:25.358760 3044 topology_manager.go:215] "Topology Admit Handler" podUID="394daa7f-b0b9-4121-9800-b5db47ff9611" podNamespace="kube-system" podName="coredns-7db6d8ff4d-nf7mv" Jun 25 14:17:25.363037 kubelet[3044]: I0625 14:17:25.362975 3044 topology_manager.go:215] "Topology Admit Handler" podUID="7a3e59ff-5767-4d1a-8966-6d8e2be16aa6" podNamespace="kube-system" podName="coredns-7db6d8ff4d-sxhnd" Jun 25 14:17:25.377755 kubelet[3044]: I0625 14:17:25.376388 3044 topology_manager.go:215] "Topology Admit Handler" podUID="766af770-6e30-4ef9-b8c4-4b069a4bd63d" podNamespace="calico-system" podName="calico-kube-controllers-67f64868cd-fnncv" Jun 25 14:17:25.377423 systemd[1]: Created slice kubepods-burstable-pod394daa7f_b0b9_4121_9800_b5db47ff9611.slice - libcontainer container kubepods-burstable-pod394daa7f_b0b9_4121_9800_b5db47ff9611.slice. Jun 25 14:17:25.395854 systemd[1]: Created slice kubepods-burstable-pod7a3e59ff_5767_4d1a_8966_6d8e2be16aa6.slice - libcontainer container kubepods-burstable-pod7a3e59ff_5767_4d1a_8966_6d8e2be16aa6.slice. Jun 25 14:17:25.402394 systemd[1]: Created slice kubepods-besteffort-pod766af770_6e30_4ef9_b8c4_4b069a4bd63d.slice - libcontainer container kubepods-besteffort-pod766af770_6e30_4ef9_b8c4_4b069a4bd63d.slice. Jun 25 14:17:25.501895 kubelet[3044]: I0625 14:17:25.501834 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/766af770-6e30-4ef9-b8c4-4b069a4bd63d-tigera-ca-bundle\") pod \"calico-kube-controllers-67f64868cd-fnncv\" (UID: \"766af770-6e30-4ef9-b8c4-4b069a4bd63d\") " pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" Jun 25 14:17:25.502352 kubelet[3044]: I0625 14:17:25.502287 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlp2c\" (UniqueName: \"kubernetes.io/projected/766af770-6e30-4ef9-b8c4-4b069a4bd63d-kube-api-access-tlp2c\") pod \"calico-kube-controllers-67f64868cd-fnncv\" (UID: \"766af770-6e30-4ef9-b8c4-4b069a4bd63d\") " pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" Jun 25 14:17:25.503869 kubelet[3044]: I0625 14:17:25.502694 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/394daa7f-b0b9-4121-9800-b5db47ff9611-config-volume\") pod \"coredns-7db6d8ff4d-nf7mv\" (UID: \"394daa7f-b0b9-4121-9800-b5db47ff9611\") " pod="kube-system/coredns-7db6d8ff4d-nf7mv" Jun 25 14:17:25.503869 kubelet[3044]: I0625 14:17:25.502788 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs8r9\" (UniqueName: \"kubernetes.io/projected/7a3e59ff-5767-4d1a-8966-6d8e2be16aa6-kube-api-access-gs8r9\") pod \"coredns-7db6d8ff4d-sxhnd\" (UID: \"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6\") " pod="kube-system/coredns-7db6d8ff4d-sxhnd" Jun 25 14:17:25.503869 kubelet[3044]: I0625 14:17:25.502833 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7r4s\" (UniqueName: \"kubernetes.io/projected/394daa7f-b0b9-4121-9800-b5db47ff9611-kube-api-access-z7r4s\") pod \"coredns-7db6d8ff4d-nf7mv\" (UID: \"394daa7f-b0b9-4121-9800-b5db47ff9611\") " 
pod="kube-system/coredns-7db6d8ff4d-nf7mv" Jun 25 14:17:25.503869 kubelet[3044]: I0625 14:17:25.502912 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a3e59ff-5767-4d1a-8966-6d8e2be16aa6-config-volume\") pod \"coredns-7db6d8ff4d-sxhnd\" (UID: \"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6\") " pod="kube-system/coredns-7db6d8ff4d-sxhnd" Jun 25 14:17:25.687812 containerd[1804]: time="2024-06-25T14:17:25.687636763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nf7mv,Uid:394daa7f-b0b9-4121-9800-b5db47ff9611,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:25.710774 containerd[1804]: time="2024-06-25T14:17:25.710004004Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f64868cd-fnncv,Uid:766af770-6e30-4ef9-b8c4-4b069a4bd63d,Namespace:calico-system,Attempt:0,}" Jun 25 14:17:25.712720 containerd[1804]: time="2024-06-25T14:17:25.712612967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhnd,Uid:7a3e59ff-5767-4d1a-8966-6d8e2be16aa6,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:26.458453 containerd[1804]: time="2024-06-25T14:17:26.458356508Z" level=info msg="shim disconnected" id=3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1 namespace=k8s.io Jun 25 14:17:26.458453 containerd[1804]: time="2024-06-25T14:17:26.458438636Z" level=warning msg="cleaning up after shim disconnected" id=3aeaf3d675377c24686ef1f017247e2d17a5cfe87642829b1b6590a9b543cee1 namespace=k8s.io Jun 25 14:17:26.458453 containerd[1804]: time="2024-06-25T14:17:26.458461532Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:17:26.585296 containerd[1804]: time="2024-06-25T14:17:26.585202227Z" level=error msg="Failed to destroy network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.589387 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e-shm.mount: Deactivated successfully. 
Jun 25 14:17:26.593386 containerd[1804]: time="2024-06-25T14:17:26.593310983Z" level=error msg="encountered an error cleaning up failed sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.593827 containerd[1804]: time="2024-06-25T14:17:26.593759088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nf7mv,Uid:394daa7f-b0b9-4121-9800-b5db47ff9611,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.594367 kubelet[3044]: E0625 14:17:26.594297 3044 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.597430 kubelet[3044]: E0625 14:17:26.594398 3044 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nf7mv" Jun 25 14:17:26.597430 kubelet[3044]: E0625 14:17:26.594450 3044 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-nf7mv" Jun 25 14:17:26.597430 kubelet[3044]: E0625 14:17:26.594531 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-nf7mv_kube-system(394daa7f-b0b9-4121-9800-b5db47ff9611)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-nf7mv_kube-system(394daa7f-b0b9-4121-9800-b5db47ff9611)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nf7mv" podUID="394daa7f-b0b9-4121-9800-b5db47ff9611" Jun 25 14:17:26.623818 containerd[1804]: time="2024-06-25T14:17:26.623731227Z" level=error msg="Failed to destroy network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.630024 containerd[1804]: time="2024-06-25T14:17:26.624430829Z" level=error msg="encountered an error cleaning up failed sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.630024 containerd[1804]: time="2024-06-25T14:17:26.624522965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f64868cd-fnncv,Uid:766af770-6e30-4ef9-b8c4-4b069a4bd63d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.627639 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2-shm.mount: Deactivated successfully. Jun 25 14:17:26.634276 kubelet[3044]: E0625 14:17:26.631309 3044 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.634276 kubelet[3044]: E0625 14:17:26.631409 3044 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" Jun 25 14:17:26.634276 kubelet[3044]: E0625 14:17:26.631443 3044 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" Jun 25 14:17:26.635761 kubelet[3044]: E0625 14:17:26.631567 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-67f64868cd-fnncv_calico-system(766af770-6e30-4ef9-b8c4-4b069a4bd63d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-67f64868cd-fnncv_calico-system(766af770-6e30-4ef9-b8c4-4b069a4bd63d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" 
podUID="766af770-6e30-4ef9-b8c4-4b069a4bd63d" Jun 25 14:17:26.639976 containerd[1804]: time="2024-06-25T14:17:26.639895892Z" level=error msg="Failed to destroy network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.640641 containerd[1804]: time="2024-06-25T14:17:26.640576822Z" level=error msg="encountered an error cleaning up failed sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.640813 containerd[1804]: time="2024-06-25T14:17:26.640703086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhnd,Uid:7a3e59ff-5767-4d1a-8966-6d8e2be16aa6,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.641081 kubelet[3044]: E0625 14:17:26.641007 3044 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.641218 kubelet[3044]: E0625 14:17:26.641095 3044 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sxhnd" Jun 25 14:17:26.641218 kubelet[3044]: E0625 14:17:26.641129 3044 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-sxhnd" Jun 25 14:17:26.642130 kubelet[3044]: E0625 14:17:26.641210 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-sxhnd_kube-system(7a3e59ff-5767-4d1a-8966-6d8e2be16aa6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-sxhnd_kube-system(7a3e59ff-5767-4d1a-8966-6d8e2be16aa6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sxhnd" podUID="7a3e59ff-5767-4d1a-8966-6d8e2be16aa6" Jun 25 14:17:26.787034 systemd[1]: Created slice kubepods-besteffort-podf0eb91a0_b29d_4d99_bc83_5df8975b23bb.slice - libcontainer container kubepods-besteffort-podf0eb91a0_b29d_4d99_bc83_5df8975b23bb.slice. Jun 25 14:17:26.793113 containerd[1804]: time="2024-06-25T14:17:26.793058733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6sfl8,Uid:f0eb91a0-b29d-4d99-bc83-5df8975b23bb,Namespace:calico-system,Attempt:0,}" Jun 25 14:17:26.891490 containerd[1804]: time="2024-06-25T14:17:26.891417136Z" level=error msg="Failed to destroy network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.892376 containerd[1804]: time="2024-06-25T14:17:26.892320691Z" level=error msg="encountered an error cleaning up failed sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.892591 containerd[1804]: time="2024-06-25T14:17:26.892543531Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6sfl8,Uid:f0eb91a0-b29d-4d99-bc83-5df8975b23bb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.893043 kubelet[3044]: E0625 14:17:26.892973 3044 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:26.893176 kubelet[3044]: E0625 14:17:26.893062 3044 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:26.893176 kubelet[3044]: E0625 14:17:26.893096 3044 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-6sfl8" Jun 25 14:17:26.894571 kubelet[3044]: E0625 14:17:26.893172 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-6sfl8_calico-system(f0eb91a0-b29d-4d99-bc83-5df8975b23bb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-6sfl8_calico-system(f0eb91a0-b29d-4d99-bc83-5df8975b23bb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6sfl8" podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:26.954054 kubelet[3044]: I0625 14:17:26.953398 3044 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:26.957356 containerd[1804]: time="2024-06-25T14:17:26.957279194Z" level=info msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" Jun 25 14:17:26.958850 containerd[1804]: time="2024-06-25T14:17:26.958744350Z" level=info msg="Ensure that sandbox 398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e in task-service has been cleanup successfully" Jun 25 14:17:26.969714 containerd[1804]: time="2024-06-25T14:17:26.967201779Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:17:26.969923 kubelet[3044]: I0625 14:17:26.968581 3044 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:26.970193 containerd[1804]: time="2024-06-25T14:17:26.970148434Z" level=info msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" Jun 25 14:17:26.970650 containerd[1804]: time="2024-06-25T14:17:26.970611851Z" level=info msg="Ensure that sandbox 4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7 in task-service has been cleanup successfully" Jun 25 14:17:26.977566 kubelet[3044]: I0625 14:17:26.973499 3044 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:26.977757 containerd[1804]: time="2024-06-25T14:17:26.975342959Z" level=info msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" Jun 25 14:17:26.977757 containerd[1804]: time="2024-06-25T14:17:26.975706644Z" level=info msg="Ensure that sandbox e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce in task-service has been cleanup successfully" Jun 25 14:17:26.991554 kubelet[3044]: I0625 14:17:26.990916 3044 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:26.992722 containerd[1804]: time="2024-06-25T14:17:26.992639503Z" level=info msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" Jun 25 14:17:26.993261 containerd[1804]: time="2024-06-25T14:17:26.993221744Z" level=info msg="Ensure that sandbox 3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2 in task-service has been cleanup successfully" Jun 25 14:17:27.103326 containerd[1804]: time="2024-06-25T14:17:27.103231837Z" level=error msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" failed" error="failed to destroy network for sandbox 
\"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:27.103705 kubelet[3044]: E0625 14:17:27.103593 3044 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:27.103836 kubelet[3044]: E0625 14:17:27.103732 3044 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7"} Jun 25 14:17:27.103935 kubelet[3044]: E0625 14:17:27.103890 3044 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:17:27.104055 kubelet[3044]: E0625 14:17:27.103963 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-sxhnd" podUID="7a3e59ff-5767-4d1a-8966-6d8e2be16aa6" Jun 25 14:17:27.110927 containerd[1804]: time="2024-06-25T14:17:27.110850956Z" level=error msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" failed" error="failed to destroy network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:27.111522 kubelet[3044]: E0625 14:17:27.111436 3044 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:27.111734 kubelet[3044]: E0625 14:17:27.111531 3044 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e"} Jun 25 14:17:27.111734 kubelet[3044]: E0625 14:17:27.111611 3044 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"394daa7f-b0b9-4121-9800-b5db47ff9611\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:17:27.111944 kubelet[3044]: E0625 14:17:27.111721 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"394daa7f-b0b9-4121-9800-b5db47ff9611\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-nf7mv" podUID="394daa7f-b0b9-4121-9800-b5db47ff9611" Jun 25 14:17:27.117845 containerd[1804]: time="2024-06-25T14:17:27.117749617Z" level=error msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" failed" error="failed to destroy network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:27.125375 kubelet[3044]: E0625 14:17:27.125289 3044 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:27.125572 kubelet[3044]: E0625 14:17:27.125393 3044 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce"} Jun 25 14:17:27.125823 kubelet[3044]: E0625 14:17:27.125752 3044 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:17:27.126252 kubelet[3044]: E0625 14:17:27.125867 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f0eb91a0-b29d-4d99-bc83-5df8975b23bb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-6sfl8" 
podUID="f0eb91a0-b29d-4d99-bc83-5df8975b23bb" Jun 25 14:17:27.149292 containerd[1804]: time="2024-06-25T14:17:27.148297961Z" level=error msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" failed" error="failed to destroy network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:17:27.149481 kubelet[3044]: E0625 14:17:27.148853 3044 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:27.149481 kubelet[3044]: E0625 14:17:27.148952 3044 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2"} Jun 25 14:17:27.149481 kubelet[3044]: E0625 14:17:27.149042 3044 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"766af770-6e30-4ef9-b8c4-4b069a4bd63d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:17:27.149481 kubelet[3044]: E0625 14:17:27.149119 3044 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"766af770-6e30-4ef9-b8c4-4b069a4bd63d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" podUID="766af770-6e30-4ef9-b8c4-4b069a4bd63d" Jun 25 14:17:27.319240 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7-shm.mount: Deactivated successfully. Jun 25 14:17:33.168807 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:17:33.169025 kernel: audit: type=1130 audit(1719325053.162:511): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.245:22-139.178.68.195:59596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:33.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.245:22-139.178.68.195:59596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:33.163445 systemd[1]: Started sshd@7-172.31.16.245:22-139.178.68.195:59596.service - OpenSSH per-connection server daemon (139.178.68.195:59596). 
Jun 25 14:17:33.367000 audit[3979]: USER_ACCT pid=3979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.374282 sshd[3979]: Accepted publickey for core from 139.178.68.195 port 59596 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:33.374770 kernel: audit: type=1101 audit(1719325053.367:512): pid=3979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.384691 kernel: audit: type=1103 audit(1719325053.375:513): pid=3979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.385211 kernel: audit: type=1006 audit(1719325053.375:514): pid=3979 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 14:17:33.375000 audit[3979]: CRED_ACQ pid=3979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.378835 sshd[3979]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:33.375000 audit[3979]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf363d40 a2=3 a3=1 items=0 ppid=1 pid=3979 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:33.390479 kernel: audit: type=1300 audit(1719325053.375:514): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf363d40 a2=3 a3=1 items=0 ppid=1 pid=3979 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:33.394263 systemd-logind[1794]: New session 8 of user core. Jun 25 14:17:33.375000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:33.398879 kernel: audit: type=1327 audit(1719325053.375:514): proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:33.400997 systemd[1]: Started session-8.scope - Session 8 of User core. 
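Note: the PROCTITLE field in the audit records is the hex-encoded command line of the audited process, with NUL bytes separating argv entries; the value logged for sshd above decodes to "sshd: core [priv]". A quick decoder (a helper for reading these records, not something run on the host):

    raw = "737368643A20636F7265205B707269765D"  # PROCTITLE value from the sshd record above
    print(bytes.fromhex(raw).replace(b"\x00", b" ").decode())  # -> sshd: core [priv]

The longer runc and iptables-nft-restore PROCTITLE values further down decode the same way; the kernel caps the recorded length, which is why some of them end mid container ID.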
Jun 25 14:17:33.409000 audit[3979]: USER_START pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.419719 kernel: audit: type=1105 audit(1719325053.409:515): pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.417000 audit[3981]: CRED_ACQ pid=3981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.425755 kernel: audit: type=1103 audit(1719325053.417:516): pid=3981 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.778826 sshd[3979]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:33.781000 audit[3979]: USER_END pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.789168 systemd[1]: sshd@7-172.31.16.245:22-139.178.68.195:59596.service: Deactivated successfully. Jun 25 14:17:33.781000 audit[3979]: CRED_DISP pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.790541 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:17:33.795893 kernel: audit: type=1106 audit(1719325053.781:517): pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.798205 kernel: audit: type=1104 audit(1719325053.781:518): pid=3979 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:33.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.245:22-139.178.68.195:59596 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:33.797809 systemd-logind[1794]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:17:33.800268 systemd-logind[1794]: Removed session 8. Jun 25 14:17:33.919261 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2819403770.mount: Deactivated successfully. 
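Note: each audit record carries an audit(<epoch>.<millis>:<serial>) stamp rather than a wall-clock time; the epoch part converts to the same UTC instants the journal prints. For the session-close records above (1719325053.781), a quick conversion:

    from datetime import datetime, timezone
    print(datetime.fromtimestamp(1719325053.781, tz=timezone.utc))
    # -> 2024-06-25 14:17:33.781000+00:00, matching the "Jun 25 14:17:33" journal lines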
Jun 25 14:17:34.002853 containerd[1804]: time="2024-06-25T14:17:34.002793992Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:34.004923 containerd[1804]: time="2024-06-25T14:17:34.004859629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:17:34.007847 containerd[1804]: time="2024-06-25T14:17:34.007778299Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:34.012002 containerd[1804]: time="2024-06-25T14:17:34.011946965Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:34.015966 containerd[1804]: time="2024-06-25T14:17:34.015900470Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:34.017633 containerd[1804]: time="2024-06-25T14:17:34.017576178Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.050296423s" Jun 25 14:17:34.017847 containerd[1804]: time="2024-06-25T14:17:34.017809122Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:17:34.049185 containerd[1804]: time="2024-06-25T14:17:34.048175360Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:17:34.130850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount316244371.mount: Deactivated successfully. Jun 25 14:17:34.140479 containerd[1804]: time="2024-06-25T14:17:34.140387544Z" level=info msg="CreateContainer within sandbox \"db8952229c6730f01a983f0b3c23a131d18d48f592f23b82cb632351ab0e01fb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701\"" Jun 25 14:17:34.143808 containerd[1804]: time="2024-06-25T14:17:34.141390626Z" level=info msg="StartContainer for \"6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701\"" Jun 25 14:17:34.181986 systemd[1]: Started cri-containerd-6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701.scope - libcontainer container 6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701. 
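Note: the calico/node image pull above reports both the byte count and the elapsed time, which pins the effective pull rate at roughly 15-16 MB/s (illustrative arithmetic, not a figure taken from the log):

    size_bytes = 110_491_212   # 'size "110491212"' from the Pulled image message above
    elapsed_s  = 7.050296423   # "in 7.050296423s"
    print(f"{size_bytes / elapsed_s / 1e6:.1f} MB/s")  # -> 15.7 MB/s (about 14.9 MiB/s)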
Jun 25 14:17:34.213000 audit: BPF prog-id=138 op=LOAD Jun 25 14:17:34.213000 audit[4006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=3502 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:34.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664663238626330373935343535623365653732333634663065666661 Jun 25 14:17:34.214000 audit: BPF prog-id=139 op=LOAD Jun 25 14:17:34.214000 audit[4006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=3502 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:34.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664663238626330373935343535623365653732333634663065666661 Jun 25 14:17:34.215000 audit: BPF prog-id=139 op=UNLOAD Jun 25 14:17:34.215000 audit: BPF prog-id=138 op=UNLOAD Jun 25 14:17:34.215000 audit: BPF prog-id=140 op=LOAD Jun 25 14:17:34.215000 audit[4006]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=3502 pid=4006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:34.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3664663238626330373935343535623365653732333634663065666661 Jun 25 14:17:34.252815 containerd[1804]: time="2024-06-25T14:17:34.252751145Z" level=info msg="StartContainer for \"6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701\" returns successfully" Jun 25 14:17:34.429777 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:17:34.429986 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
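Note: the SYSCALL records use arm64 (asm-generic) syscall numbers, so the runc records above with syscall=280 are bpf(2) calls made while starting the container (most likely its cgroup device filters), and the syscall=64, 56 and 211 records elsewhere in this log are write, openat and sendmsg. A small lookup table for reading them (standard numbers, not values taken from the log):

    ARM64_SYSCALLS = {56: "openat", 64: "write", 211: "sendmsg", 280: "bpf"}
    print(ARM64_SYSCALLS[280])  # bpf(2), matching the BPF prog-id LOAD/UNLOAD pairs above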
Jun 25 14:17:36.174000 audit[4133]: AVC avc: denied { write } for pid=4133 comm="tee" name="fd" dev="proc" ino=22164 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.174000 audit[4133]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffbf50a0d a2=241 a3=1b6 items=1 ppid=4085 pid=4133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.174000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:17:36.174000 audit: PATH item=0 name="/dev/fd/63" inode=22551 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.174000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.177000 audit[4119]: AVC avc: denied { write } for pid=4119 comm="tee" name="fd" dev="proc" ino=22554 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.177000 audit[4119]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc9686a1e a2=241 a3=1b6 items=1 ppid=4079 pid=4119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.177000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:17:36.177000 audit: PATH item=0 name="/dev/fd/63" inode=22537 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.188000 audit[4113]: AVC avc: denied { write } for pid=4113 comm="tee" name="fd" dev="proc" ino=22168 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.188000 audit[4113]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd7afea1c a2=241 a3=1b6 items=1 ppid=4075 pid=4113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.188000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:17:36.188000 audit: PATH item=0 name="/dev/fd/63" inode=22536 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.188000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.218000 audit[4126]: AVC avc: denied { write } for pid=4126 comm="tee" name="fd" dev="proc" ino=22172 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.218000 audit[4126]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffddecca0c a2=241 a3=1b6 items=1 ppid=4089 pid=4126 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.218000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:17:36.218000 audit: PATH item=0 name="/dev/fd/63" inode=22549 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.218000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.224000 audit[4124]: AVC avc: denied { write } for pid=4124 comm="tee" name="fd" dev="proc" ino=22178 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.224000 audit[4124]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd6894a1c a2=241 a3=1b6 items=1 ppid=4078 pid=4124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.224000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:17:36.224000 audit: PATH item=0 name="/dev/fd/63" inode=22548 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.224000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.228000 audit[4130]: AVC avc: denied { write } for pid=4130 comm="tee" name="fd" dev="proc" ino=22182 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.228000 audit[4130]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff426ba1c a2=241 a3=1b6 items=1 ppid=4087 pid=4130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.228000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:17:36.228000 audit: PATH item=0 name="/dev/fd/63" inode=22550 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.261000 audit[4149]: AVC avc: denied { write } for pid=4149 comm="tee" name="fd" dev="proc" ino=22190 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:17:36.261000 audit[4149]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcc39ba1d a2=241 a3=1b6 items=1 ppid=4081 pid=4149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.261000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:17:36.261000 audit: PATH item=0 name="/dev/fd/63" inode=22558 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:17:36.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:17:36.910263 (udev-worker)[4041]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:17:36.923793 systemd-networkd[1524]: vxlan.calico: Link UP Jun 25 14:17:36.923811 systemd-networkd[1524]: vxlan.calico: Gained carrier Jun 25 14:17:36.955940 (udev-worker)[4040]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:17:36.972000 audit: BPF prog-id=141 op=LOAD Jun 25 14:17:36.972000 audit[4218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc6e91478 a2=70 a3=ffffc6e914e8 items=0 ppid=4077 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:17:36.972000 audit: BPF prog-id=141 op=UNLOAD Jun 25 14:17:36.972000 audit: BPF prog-id=142 op=LOAD Jun 25 14:17:36.972000 audit[4218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc6e91478 a2=70 a3=4b243c items=0 ppid=4077 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:17:36.972000 audit: BPF prog-id=142 op=UNLOAD Jun 25 14:17:36.972000 audit: BPF prog-id=143 op=LOAD Jun 25 14:17:36.972000 audit[4218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc6e91418 a2=70 a3=ffffc6e91488 items=0 ppid=4077 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.972000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:17:36.972000 audit: BPF prog-id=143 op=UNLOAD Jun 25 14:17:36.973000 audit: BPF prog-id=144 op=LOAD Jun 25 14:17:36.973000 audit[4218]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc6e91448 a2=70 a3=153e7449 items=0 ppid=4077 pid=4218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:36.973000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:17:37.000000 audit: BPF prog-id=144 op=UNLOAD Jun 25 14:17:37.000000 audit[1750]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=236 a0=15 a1=ffffd3299630 a2=4040 a3=1 items=0 ppid=1 pid=1750 auid=4294967295 uid=245 gid=245 euid=245 suid=245 fsuid=245 egid=245 sgid=245 fsgid=245 tty=(none) ses=4294967295 comm="systemd-resolve" exe="/usr/lib/systemd/systemd-resolved" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:37.000000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-resolved" Jun 25 14:17:37.111000 audit[4247]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=4247 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:37.111000 audit[4247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffdfdb87c0 a2=0 a3=ffffb9acdfa8 items=0 ppid=4077 pid=4247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:37.111000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:37.114000 audit[4245]: NETFILTER_CFG table=raw:98 family=2 entries=19 op=nft_register_chain pid=4245 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:37.114000 audit[4245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=ffffcb7eaf60 a2=0 a3=ffff8dee3fa8 items=0 ppid=4077 pid=4245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:37.114000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:37.129000 audit[4248]: NETFILTER_CFG table=nat:99 family=2 entries=15 op=nft_register_chain pid=4248 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:37.129000 audit[4248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=fffffad527d0 a2=0 a3=ffff9f7e7fa8 items=0 ppid=4077 pid=4248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:37.129000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:37.133000 audit[4250]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=4250 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:37.133000 audit[4250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=fffff7032a40 a2=0 a3=ffff9916dfa8 items=0 ppid=4077 pid=4250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:37.133000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:37.781572 containerd[1804]: time="2024-06-25T14:17:37.781486607Z" level=info msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" Jun 25 14:17:37.963498 kubelet[3044]: I0625 
14:17:37.963376 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-829q5" podStartSLOduration=4.984395516 podStartE2EDuration="24.963330106s" podCreationTimestamp="2024-06-25 14:17:13 +0000 UTC" firstStartedPulling="2024-06-25 14:17:14.040162567 +0000 UTC m=+24.578338177" lastFinishedPulling="2024-06-25 14:17:34.019097169 +0000 UTC m=+44.557272767" observedRunningTime="2024-06-25 14:17:35.036384141 +0000 UTC m=+45.574559787" watchObservedRunningTime="2024-06-25 14:17:37.963330106 +0000 UTC m=+48.501505740" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.960 [INFO][4271] k8s.go 608: Cleaning up netns ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.960 [INFO][4271] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" iface="eth0" netns="/var/run/netns/cni-abf753d2-7b61-f674-50fc-f21eccb2e812" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.960 [INFO][4271] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" iface="eth0" netns="/var/run/netns/cni-abf753d2-7b61-f674-50fc-f21eccb2e812" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.961 [INFO][4271] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" iface="eth0" netns="/var/run/netns/cni-abf753d2-7b61-f674-50fc-f21eccb2e812" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.961 [INFO][4271] k8s.go 615: Releasing IP address(es) ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:37.961 [INFO][4271] utils.go 188: Calico CNI releasing IP address ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.012 [INFO][4277] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.013 [INFO][4277] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.013 [INFO][4277] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.025 [WARNING][4277] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.026 [INFO][4277] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.028 [INFO][4277] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:38.033972 containerd[1804]: 2024-06-25 14:17:38.030 [INFO][4271] k8s.go 621: Teardown processing complete. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:38.040770 containerd[1804]: time="2024-06-25T14:17:38.038568245Z" level=info msg="TearDown network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" successfully" Jun 25 14:17:38.040770 containerd[1804]: time="2024-06-25T14:17:38.038627693Z" level=info msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" returns successfully" Jun 25 14:17:38.038400 systemd[1]: run-netns-cni\x2dabf753d2\x2d7b61\x2df674\x2d50fc\x2df21eccb2e812.mount: Deactivated successfully. Jun 25 14:17:38.042298 containerd[1804]: time="2024-06-25T14:17:38.042239546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6sfl8,Uid:f0eb91a0-b29d-4d99-bc83-5df8975b23bb,Namespace:calico-system,Attempt:1,}" Jun 25 14:17:38.350176 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:17:38.350336 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali162e1e04848: link becomes ready Jun 25 14:17:38.349150 systemd-networkd[1524]: cali162e1e04848: Link UP Jun 25 14:17:38.350412 systemd-networkd[1524]: cali162e1e04848: Gained carrier Jun 25 14:17:38.352509 (udev-worker)[4224]: Network interface NamePolicy= disabled on kernel command line. 
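Note: mount unit names such as run-netns-cni\x2dabf753d2\x2d....mount above are systemd-escaped paths: "/" becomes "-" and a literal "-" becomes "\x2d". Reversing the escaping (assuming only these two rules apply to this name) recovers the network namespace path the CNI teardown used earlier; /var/run/netns and /run/netns are the same location via the /var/run symlink:

    unit = r"run-netns-cni\x2dabf753d2\x2d7b61\x2df674\x2d50fc\x2df21eccb2e812.mount"
    body = unit[: -len(".mount")]
    path = "/" + body.replace(r"\x2d", "\0").replace("-", "/").replace("\0", "-")
    print(path)  # -> /run/netns/cni-abf753d2-7b61-f674-50fc-f21eccb2e812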
Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.215 [INFO][4286] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0 csi-node-driver- calico-system f0eb91a0-b29d-4d99-bc83-5df8975b23bb 780 0 2024-06-25 14:17:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:6cc9df58f4 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-16-245 csi-node-driver-6sfl8 eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali162e1e04848 [] []}} ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.215 [INFO][4286] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.272 [INFO][4297] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" HandleID="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.291 [INFO][4297] ipam_plugin.go 264: Auto assigning IP ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" HandleID="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002edbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-245", "pod":"csi-node-driver-6sfl8", "timestamp":"2024-06-25 14:17:38.272885236 +0000 UTC"}, Hostname:"ip-172-31-16-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.291 [INFO][4297] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.291 [INFO][4297] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.291 [INFO][4297] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-245' Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.294 [INFO][4297] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.301 [INFO][4297] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.309 [INFO][4297] ipam.go 489: Trying affinity for 192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.312 [INFO][4297] ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.319 [INFO][4297] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.319 [INFO][4297] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.322 [INFO][4297] ipam.go 1685: Creating new handle: k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1 Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.327 [INFO][4297] ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.335 [INFO][4297] ipam.go 1216: Successfully claimed IPs: [192.168.51.193/26] block=192.168.51.192/26 handle="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.335 [INFO][4297] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.193/26] handle="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" host="ip-172-31-16-245" Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.335 [INFO][4297] ipam_plugin.go 373: Released host-wide IPAM lock. 
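Note: the IPAM trace above shows the node claiming affinity for the block 192.168.51.192/26 and assigning 192.168.51.193 to the csi-node-driver pod; the /26 gives this node 64 addresses to allocate from, and the endpoint object that follows records the pod address as 192.168.51.193/32. The same arithmetic with the standard library (illustrative only):

    import ipaddress
    block = ipaddress.ip_network("192.168.51.192/26")  # per-node affinity block from the log
    print(block.num_addresses)                          # 64 addresses managed by this node
    pod_ip = ipaddress.ip_address("192.168.51.193")
    print(pod_ip in block)                              # True: assigned out of the node's block
    print(ipaddress.ip_network(f"{pod_ip}/32"))         # published on the endpoint as a /32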
Jun 25 14:17:38.377537 containerd[1804]: 2024-06-25 14:17:38.335 [INFO][4297] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.51.193/26] IPv6=[] ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" HandleID="k8s-pod-network.24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.341 [INFO][4286] k8s.go 386: Populated endpoint ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0eb91a0-b29d-4d99-bc83-5df8975b23bb", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"", Pod:"csi-node-driver-6sfl8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali162e1e04848", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.341 [INFO][4286] k8s.go 387: Calico CNI using IPs: [192.168.51.193/32] ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.341 [INFO][4286] dataplane_linux.go 68: Setting the host side veth name to cali162e1e04848 ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.351 [INFO][4286] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.354 [INFO][4286] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0eb91a0-b29d-4d99-bc83-5df8975b23bb", ResourceVersion:"780", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1", Pod:"csi-node-driver-6sfl8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali162e1e04848", MAC:"32:c0:17:02:ae:33", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:38.378878 containerd[1804]: 2024-06-25 14:17:38.370 [INFO][4286] k8s.go 500: Wrote updated endpoint to datastore ContainerID="24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1" Namespace="calico-system" Pod="csi-node-driver-6sfl8" WorkloadEndpoint="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:38.438001 kernel: kauditd_printk_skb: 77 callbacks suppressed Jun 25 14:17:38.438193 kernel: audit: type=1325 audit(1719325058.430:544): table=filter:101 family=2 entries=34 op=nft_register_chain pid=4326 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:38.430000 audit[4326]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4326 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:38.430000 audit[4326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffff284fe30 a2=0 a3=ffff86509fa8 items=0 ppid=4077 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.443722 kernel: audit: type=1300 audit(1719325058.430:544): arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffff284fe30 a2=0 a3=ffff86509fa8 items=0 ppid=4077 pid=4326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.430000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:38.448177 kernel: audit: type=1327 audit(1719325058.430:544): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:38.448326 containerd[1804]: 
time="2024-06-25T14:17:38.444963720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:38.448326 containerd[1804]: time="2024-06-25T14:17:38.445055245Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:38.448326 containerd[1804]: time="2024-06-25T14:17:38.445088029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:38.448326 containerd[1804]: time="2024-06-25T14:17:38.445112917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:38.489123 systemd[1]: Started cri-containerd-24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1.scope - libcontainer container 24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1. Jun 25 14:17:38.509000 audit: BPF prog-id=145 op=LOAD Jun 25 14:17:38.511000 audit: BPF prog-id=146 op=LOAD Jun 25 14:17:38.514399 kernel: audit: type=1334 audit(1719325058.509:545): prog-id=145 op=LOAD Jun 25 14:17:38.514463 kernel: audit: type=1334 audit(1719325058.511:546): prog-id=146 op=LOAD Jun 25 14:17:38.514519 kernel: audit: type=1300 audit(1719325058.511:546): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=4325 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.511000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=4325 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.519418 kernel: audit: type=1327 audit(1719325058.511:546): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613132376365376638353661663433623636633337356632376539 Jun 25 14:17:38.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613132376365376638353661663433623636633337356632376539 Jun 25 14:17:38.511000 audit: BPF prog-id=147 op=LOAD Jun 25 14:17:38.525640 kernel: audit: type=1334 audit(1719325058.511:547): prog-id=147 op=LOAD Jun 25 14:17:38.525899 kernel: audit: type=1300 audit(1719325058.511:547): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=4325 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.511000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=4325 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.529941 kernel: audit: type=1327 audit(1719325058.511:547): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613132376365376638353661663433623636633337356632376539 Jun 25 14:17:38.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613132376365376638353661663433623636633337356632376539 Jun 25 14:17:38.513000 audit: BPF prog-id=147 op=UNLOAD Jun 25 14:17:38.513000 audit: BPF prog-id=146 op=UNLOAD Jun 25 14:17:38.513000 audit: BPF prog-id=148 op=LOAD Jun 25 14:17:38.513000 audit[4336]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=4325 pid=4336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:38.513000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234613132376365376638353661663433623636633337356632376539 Jun 25 14:17:38.557607 containerd[1804]: time="2024-06-25T14:17:38.557548665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-6sfl8,Uid:f0eb91a0-b29d-4d99-bc83-5df8975b23bb,Namespace:calico-system,Attempt:1,} returns sandbox id \"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1\"" Jun 25 14:17:38.561032 containerd[1804]: time="2024-06-25T14:17:38.560978435Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:17:38.601494 systemd-networkd[1524]: vxlan.calico: Gained IPv6LL Jun 25 14:17:38.827076 systemd[1]: Started sshd@8-172.31.16.245:22-139.178.68.195:39436.service - OpenSSH per-connection server daemon (139.178.68.195:39436). Jun 25 14:17:38.826000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.245:22-139.178.68.195:39436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:17:38.997000 audit[4359]: USER_ACCT pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.001475 sshd[4359]: Accepted publickey for core from 139.178.68.195 port 39436 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:39.000000 audit[4359]: CRED_ACQ pid=4359 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.000000 audit[4359]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8a7e800 a2=3 a3=1 items=0 ppid=1 pid=4359 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:39.000000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:39.003109 sshd[4359]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:39.012907 systemd-logind[1794]: New session 9 of user core. Jun 25 14:17:39.016986 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 14:17:39.029000 audit[4359]: USER_START pid=4359 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.031000 audit[4361]: CRED_ACQ pid=4361 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.038887 systemd[1]: run-containerd-runc-k8s.io-24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1-runc.Kz11nA.mount: Deactivated successfully. Jun 25 14:17:39.290233 sshd[4359]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:39.291000 audit[4359]: USER_END pid=4359 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.292000 audit[4359]: CRED_DISP pid=4359 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:39.297472 systemd-logind[1794]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:17:39.297954 systemd[1]: sshd@8-172.31.16.245:22-139.178.68.195:39436.service: Deactivated successfully. Jun 25 14:17:39.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.245:22-139.178.68.195:39436 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:39.299262 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:17:39.301393 systemd-logind[1794]: Removed session 9. 
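Note: the "Accepted publickey ... SHA256:t7Am..." lines identify the client key by its OpenSSH fingerprint, which is the unpadded base64 of a SHA-256 digest over the raw public-key blob. A sketch of that computation (key.pub is a hypothetical path, not a file from this host):

    import base64, hashlib
    # key.pub holds a line like "ssh-rsa AAAAB3... comment"; the second field is the key blob.
    blob = base64.b64decode(open("key.pub").read().split()[1])
    digest = hashlib.sha256(blob).digest()
    print("SHA256:" + base64.b64encode(digest).rstrip(b"=").decode())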
Jun 25 14:17:39.495946 systemd-networkd[1524]: cali162e1e04848: Gained IPv6LL Jun 25 14:17:39.782329 containerd[1804]: time="2024-06-25T14:17:39.781609038Z" level=info msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" Jun 25 14:17:39.784131 containerd[1804]: time="2024-06-25T14:17:39.783096609Z" level=info msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" Jun 25 14:17:40.031108 containerd[1804]: time="2024-06-25T14:17:40.031027773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:40.033557 containerd[1804]: time="2024-06-25T14:17:40.033389667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:17:40.043805 containerd[1804]: time="2024-06-25T14:17:40.043612264Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:40.053922 containerd[1804]: time="2024-06-25T14:17:40.053853066Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:40.071560 containerd[1804]: time="2024-06-25T14:17:40.071485316Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:40.075629 containerd[1804]: time="2024-06-25T14:17:40.073453531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 1.512207194s" Jun 25 14:17:40.075629 containerd[1804]: time="2024-06-25T14:17:40.073513016Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:17:40.090133 containerd[1804]: time="2024-06-25T14:17:40.090075242Z" level=info msg="CreateContainer within sandbox \"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.993 [INFO][4401] k8s.go 608: Cleaning up netns ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.993 [INFO][4401] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" iface="eth0" netns="/var/run/netns/cni-3c61f382-56a2-f333-739a-19500d52237c" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.993 [INFO][4401] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" iface="eth0" netns="/var/run/netns/cni-3c61f382-56a2-f333-739a-19500d52237c" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.994 [INFO][4401] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" iface="eth0" netns="/var/run/netns/cni-3c61f382-56a2-f333-739a-19500d52237c" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.994 [INFO][4401] k8s.go 615: Releasing IP address(es) ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:39.994 [INFO][4401] utils.go 188: Calico CNI releasing IP address ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.100 [INFO][4418] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.101 [INFO][4418] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.101 [INFO][4418] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.115 [WARNING][4418] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.116 [INFO][4418] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.120 [INFO][4418] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:40.125097 containerd[1804]: 2024-06-25 14:17:40.122 [INFO][4401] k8s.go 621: Teardown processing complete. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:40.149950 containerd[1804]: time="2024-06-25T14:17:40.134110252Z" level=info msg="TearDown network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" successfully" Jun 25 14:17:40.149950 containerd[1804]: time="2024-06-25T14:17:40.134168957Z" level=info msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" returns successfully" Jun 25 14:17:40.149950 containerd[1804]: time="2024-06-25T14:17:40.135487277Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhnd,Uid:7a3e59ff-5767-4d1a-8966-6d8e2be16aa6,Namespace:kube-system,Attempt:1,}" Jun 25 14:17:40.132154 systemd[1]: run-netns-cni\x2d3c61f382\x2d56a2\x2df333\x2d739a\x2d19500d52237c.mount: Deactivated successfully. Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.000 [INFO][4406] k8s.go 608: Cleaning up netns ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.001 [INFO][4406] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" iface="eth0" netns="/var/run/netns/cni-161d9c1b-3c4a-33c3-84d0-40d5ef5f7647" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.001 [INFO][4406] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" iface="eth0" netns="/var/run/netns/cni-161d9c1b-3c4a-33c3-84d0-40d5ef5f7647" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.001 [INFO][4406] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" iface="eth0" netns="/var/run/netns/cni-161d9c1b-3c4a-33c3-84d0-40d5ef5f7647" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.001 [INFO][4406] k8s.go 615: Releasing IP address(es) ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.001 [INFO][4406] utils.go 188: Calico CNI releasing IP address ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.138 [INFO][4419] ipam_plugin.go 411: Releasing address using handleID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.138 [INFO][4419] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.138 [INFO][4419] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.157 [WARNING][4419] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.157 [INFO][4419] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.160 [INFO][4419] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:40.165035 containerd[1804]: 2024-06-25 14:17:40.162 [INFO][4406] k8s.go 621: Teardown processing complete. 
ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:40.173438 containerd[1804]: time="2024-06-25T14:17:40.170550052Z" level=info msg="TearDown network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" successfully" Jun 25 14:17:40.173438 containerd[1804]: time="2024-06-25T14:17:40.170606057Z" level=info msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" returns successfully" Jun 25 14:17:40.173438 containerd[1804]: time="2024-06-25T14:17:40.172233334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f64868cd-fnncv,Uid:766af770-6e30-4ef9-b8c4-4b069a4bd63d,Namespace:calico-system,Attempt:1,}" Jun 25 14:17:40.169524 systemd[1]: run-netns-cni\x2d161d9c1b\x2d3c4a\x2d33c3\x2d84d0\x2d40d5ef5f7647.mount: Deactivated successfully. Jun 25 14:17:40.203322 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1879141652.mount: Deactivated successfully. Jun 25 14:17:40.238120 containerd[1804]: time="2024-06-25T14:17:40.238056555Z" level=info msg="CreateContainer within sandbox \"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d092847c5bc04a066909750bf90ba2aaaac7c36f06cb94b968270019e2781b36\"" Jun 25 14:17:40.240126 containerd[1804]: time="2024-06-25T14:17:40.240067803Z" level=info msg="StartContainer for \"d092847c5bc04a066909750bf90ba2aaaac7c36f06cb94b968270019e2781b36\"" Jun 25 14:17:40.328985 systemd[1]: Started cri-containerd-d092847c5bc04a066909750bf90ba2aaaac7c36f06cb94b968270019e2781b36.scope - libcontainer container d092847c5bc04a066909750bf90ba2aaaac7c36f06cb94b968270019e2781b36. Jun 25 14:17:40.382000 audit: BPF prog-id=149 op=LOAD Jun 25 14:17:40.382000 audit[4450]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4325 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393238343763356263303461303636393039373530626639306261 Jun 25 14:17:40.382000 audit: BPF prog-id=150 op=LOAD Jun 25 14:17:40.382000 audit[4450]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4325 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.382000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393238343763356263303461303636393039373530626639306261 Jun 25 14:17:40.383000 audit: BPF prog-id=150 op=UNLOAD Jun 25 14:17:40.383000 audit: BPF prog-id=149 op=UNLOAD Jun 25 14:17:40.383000 audit: BPF prog-id=151 op=LOAD Jun 25 14:17:40.383000 audit[4450]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4325 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.383000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6430393238343763356263303461303636393039373530626639306261 Jun 25 14:17:40.452506 containerd[1804]: time="2024-06-25T14:17:40.452445056Z" level=info msg="StartContainer for \"d092847c5bc04a066909750bf90ba2aaaac7c36f06cb94b968270019e2781b36\" returns successfully" Jun 25 14:17:40.458249 containerd[1804]: time="2024-06-25T14:17:40.457606540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:17:40.589927 systemd-networkd[1524]: cali99eaadcc254: Link UP Jun 25 14:17:40.597778 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:17:40.597918 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali99eaadcc254: link becomes ready Jun 25 14:17:40.597512 systemd-networkd[1524]: cali99eaadcc254: Gained carrier Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.403 [INFO][4454] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0 calico-kube-controllers-67f64868cd- calico-system 766af770-6e30-4ef9-b8c4-4b069a4bd63d 797 0 2024-06-25 14:17:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:67f64868cd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-245 calico-kube-controllers-67f64868cd-fnncv eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali99eaadcc254 [] []}} ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.404 [INFO][4454] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.500 [INFO][4487] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" HandleID="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.518 [INFO][4487] ipam_plugin.go 264: Auto assigning IP ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" HandleID="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c5f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-245", "pod":"calico-kube-controllers-67f64868cd-fnncv", "timestamp":"2024-06-25 14:17:40.500610204 +0000 UTC"}, 
Hostname:"ip-172-31-16-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.520 [INFO][4487] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.520 [INFO][4487] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.520 [INFO][4487] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-245' Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.524 [INFO][4487] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.535 [INFO][4487] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.545 [INFO][4487] ipam.go 489: Trying affinity for 192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.548 [INFO][4487] ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.552 [INFO][4487] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.553 [INFO][4487] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.555 [INFO][4487] ipam.go 1685: Creating new handle: k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906 Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.561 [INFO][4487] ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.568 [INFO][4487] ipam.go 1216: Successfully claimed IPs: [192.168.51.194/26] block=192.168.51.192/26 handle="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.569 [INFO][4487] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.194/26] handle="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" host="ip-172-31-16-245" Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.569 [INFO][4487] ipam_plugin.go 373: Released host-wide IPAM lock. 
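
[Editor's note] The IPAM records above show Calico confirming affinity for block 192.168.51.192/26 on ip-172-31-16-245 and claiming 192.168.51.194 from it. A quick standard-library check (values taken from the log) that the assigned address really falls inside the affine block:

    import ipaddress

    block = ipaddress.ip_network("192.168.51.192/26")
    assigned = ipaddress.ip_address("192.168.51.194")

    print(assigned in block)       # True: .194 lies inside .192/26
    print(block.num_addresses)     # 64 addresses per /26 block
    print(block[0], block[-1])     # 192.168.51.192 192.168.51.255
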
Jun 25 14:17:40.646711 containerd[1804]: 2024-06-25 14:17:40.569 [INFO][4487] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.51.194/26] IPv6=[] ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" HandleID="k8s-pod-network.e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.575 [INFO][4454] k8s.go 386: Populated endpoint ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0", GenerateName:"calico-kube-controllers-67f64868cd-", Namespace:"calico-system", SelfLink:"", UID:"766af770-6e30-4ef9-b8c4-4b069a4bd63d", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f64868cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"", Pod:"calico-kube-controllers-67f64868cd-fnncv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99eaadcc254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.575 [INFO][4454] k8s.go 387: Calico CNI using IPs: [192.168.51.194/32] ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.575 [INFO][4454] dataplane_linux.go 68: Setting the host side veth name to cali99eaadcc254 ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.600 [INFO][4454] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.600 [INFO][4454] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0", GenerateName:"calico-kube-controllers-67f64868cd-", Namespace:"calico-system", SelfLink:"", UID:"766af770-6e30-4ef9-b8c4-4b069a4bd63d", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f64868cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906", Pod:"calico-kube-controllers-67f64868cd-fnncv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99eaadcc254", MAC:"c6:d9:10:f6:f5:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:40.648112 containerd[1804]: 2024-06-25 14:17:40.634 [INFO][4454] k8s.go 500: Wrote updated endpoint to datastore ContainerID="e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906" Namespace="calico-system" Pod="calico-kube-controllers-67f64868cd-fnncv" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:40.675208 systemd-networkd[1524]: cali2dcce3be896: Link UP Jun 25 14:17:40.683716 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali2dcce3be896: link becomes ready Jun 25 14:17:40.682169 systemd-networkd[1524]: cali2dcce3be896: Gained carrier Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.400 [INFO][4443] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0 coredns-7db6d8ff4d- kube-system 7a3e59ff-5767-4d1a-8966-6d8e2be16aa6 796 0 2024-06-25 14:17:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-245 coredns-7db6d8ff4d-sxhnd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2dcce3be896 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.400 [INFO][4443] k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.526 [INFO][4488] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" HandleID="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.549 [INFO][4488] ipam_plugin.go 264: Auto assigning IP ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" HandleID="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400030e240), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-245", "pod":"coredns-7db6d8ff4d-sxhnd", "timestamp":"2024-06-25 14:17:40.526230479 +0000 UTC"}, Hostname:"ip-172-31-16-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.549 [INFO][4488] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.569 [INFO][4488] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.570 [INFO][4488] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-245' Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.573 [INFO][4488] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.589 [INFO][4488] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.606 [INFO][4488] ipam.go 489: Trying affinity for 192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.611 [INFO][4488] ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.614 [INFO][4488] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.615 [INFO][4488] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.628 [INFO][4488] ipam.go 1685: Creating new handle: k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7 Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.643 [INFO][4488] ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.656 [INFO][4488] 
ipam.go 1216: Successfully claimed IPs: [192.168.51.195/26] block=192.168.51.192/26 handle="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.656 [INFO][4488] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.195/26] handle="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" host="ip-172-31-16-245" Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.657 [INFO][4488] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:40.715489 containerd[1804]: 2024-06-25 14:17:40.657 [INFO][4488] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.51.195/26] IPv6=[] ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" HandleID="k8s-pod-network.a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.661 [INFO][4443] k8s.go 386: Populated endpoint ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"", Pod:"coredns-7db6d8ff4d-sxhnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2dcce3be896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.662 [INFO][4443] k8s.go 387: Calico CNI using IPs: [192.168.51.195/32] ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.662 [INFO][4443] dataplane_linux.go 68: Setting the host side veth name to cali2dcce3be896 
ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.685 [INFO][4443] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.686 [INFO][4443] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7", Pod:"coredns-7db6d8ff4d-sxhnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2dcce3be896", MAC:"e6:e3:fb:40:fa:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:40.717010 containerd[1804]: 2024-06-25 14:17:40.712 [INFO][4443] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-sxhnd" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:40.729385 containerd[1804]: time="2024-06-25T14:17:40.728848475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:40.729385 containerd[1804]: time="2024-06-25T14:17:40.728952577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:40.729385 containerd[1804]: time="2024-06-25T14:17:40.729000866Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:40.729385 containerd[1804]: time="2024-06-25T14:17:40.729028046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:40.743000 audit[4548]: NETFILTER_CFG table=filter:102 family=2 entries=34 op=nft_register_chain pid=4548 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:40.743000 audit[4548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18640 a0=3 a1=ffffeae3fda0 a2=0 a3=ffff99173fa8 items=0 ppid=4077 pid=4548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.743000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:40.801514 systemd[1]: Started cri-containerd-e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906.scope - libcontainer container e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906. Jun 25 14:17:40.809931 containerd[1804]: time="2024-06-25T14:17:40.809550281Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:40.810546 containerd[1804]: time="2024-06-25T14:17:40.809748429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:40.810546 containerd[1804]: time="2024-06-25T14:17:40.809806894Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:40.810546 containerd[1804]: time="2024-06-25T14:17:40.809833762Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:40.822000 audit[4580]: NETFILTER_CFG table=filter:103 family=2 entries=42 op=nft_register_chain pid=4580 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:40.822000 audit[4580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21524 a0=3 a1=ffffdb037050 a2=0 a3=ffff910cafa8 items=0 ppid=4077 pid=4580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.822000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:40.851951 systemd[1]: Started cri-containerd-a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7.scope - libcontainer container a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7. 
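
[Editor's note] In the SYSCALL records interleaved here, arch=c00000b7 identifies AUDIT_ARCH_AARCH64, so the syscall numbers follow the arm64 asm-generic table: the nft_register_chain entries pair with syscall=211 (sendmsg, the netlink call issued by iptables-nft), the runc records use syscall=280 (bpf), and the sshd record earlier used syscall=64 (write). A small lookup sketch for just the numbers that appear in this log (assumed mapping per the asm-generic unistd table):

    # Syscall numbers observed in these audit records on arm64 (arch c00000b7).
    AARCH64_SYSCALLS = {
        64: "write",     # sshd writing during session setup
        211: "sendmsg",  # netlink traffic behind the NETFILTER_CFG / nft_register_chain entries
        280: "bpf",      # runc loading/unloading programs (typically cgroup device filters)
    }

    def describe(arch: str, nr: int) -> str:
        if arch != "c00000b7":
            return f"arch {arch} not covered by this sketch"
        return AARCH64_SYSCALLS.get(nr, f"unmapped syscall {nr}")

    print(describe("c00000b7", 280))   # -> bpf
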
Jun 25 14:17:40.855000 audit: BPF prog-id=152 op=LOAD Jun 25 14:17:40.856000 audit: BPF prog-id=153 op=LOAD Jun 25 14:17:40.856000 audit[4558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4529 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653637666434653766663464323262316135313133373033326632 Jun 25 14:17:40.856000 audit: BPF prog-id=154 op=LOAD Jun 25 14:17:40.856000 audit[4558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4529 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653637666434653766663464323262316135313133373033326632 Jun 25 14:17:40.856000 audit: BPF prog-id=154 op=UNLOAD Jun 25 14:17:40.857000 audit: BPF prog-id=153 op=UNLOAD Jun 25 14:17:40.857000 audit: BPF prog-id=155 op=LOAD Jun 25 14:17:40.857000 audit[4558]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4529 pid=4558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6531653637666434653766663464323262316135313133373033326632 Jun 25 14:17:40.877000 audit: BPF prog-id=156 op=LOAD Jun 25 14:17:40.878000 audit: BPF prog-id=157 op=LOAD Jun 25 14:17:40.878000 audit[4582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4555 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.878000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363737613234623837666462613137316137613366333733356665 Jun 25 14:17:40.878000 audit: BPF prog-id=158 op=LOAD Jun 25 14:17:40.878000 audit[4582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4555 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.878000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363737613234623837666462613137316137613366333733356665 Jun 25 14:17:40.879000 audit: BPF prog-id=158 op=UNLOAD Jun 25 14:17:40.879000 audit: BPF prog-id=157 op=UNLOAD Jun 25 14:17:40.879000 audit: BPF prog-id=159 op=LOAD Jun 25 14:17:40.879000 audit[4582]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4555 pid=4582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:40.879000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6131363737613234623837666462613137316137613366333733356665 Jun 25 14:17:40.937439 containerd[1804]: time="2024-06-25T14:17:40.937148812Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-sxhnd,Uid:7a3e59ff-5767-4d1a-8966-6d8e2be16aa6,Namespace:kube-system,Attempt:1,} returns sandbox id \"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7\"" Jun 25 14:17:40.974378 containerd[1804]: time="2024-06-25T14:17:40.974308252Z" level=info msg="CreateContainer within sandbox \"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:17:40.988679 containerd[1804]: time="2024-06-25T14:17:40.988596834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-67f64868cd-fnncv,Uid:766af770-6e30-4ef9-b8c4-4b069a4bd63d,Namespace:calico-system,Attempt:1,} returns sandbox id \"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906\"" Jun 25 14:17:41.045331 containerd[1804]: time="2024-06-25T14:17:41.045248322Z" level=info msg="CreateContainer within sandbox \"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2efd6910f6aa02af2507f54b0b02d55f2bb8a3573175afba76ac18bbb2598180\"" Jun 25 14:17:41.048351 containerd[1804]: time="2024-06-25T14:17:41.046616357Z" level=info msg="StartContainer for \"2efd6910f6aa02af2507f54b0b02d55f2bb8a3573175afba76ac18bbb2598180\"" Jun 25 14:17:41.099977 systemd[1]: Started cri-containerd-2efd6910f6aa02af2507f54b0b02d55f2bb8a3573175afba76ac18bbb2598180.scope - libcontainer container 2efd6910f6aa02af2507f54b0b02d55f2bb8a3573175afba76ac18bbb2598180. 
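
[Editor's note] Each container start in this log produces a short burst of "BPF prog-id=N op=LOAD/UNLOAD" audit events from runc. A throwaway sketch (ours) that pairs them up from a saved log, e.g. piped in from journalctl output, to confirm no program IDs are left loaded unexpectedly:

    import re
    import sys

    PAT = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

    loaded = set()
    for line in sys.stdin:
        m = PAT.search(line)
        if not m:
            continue
        prog_id, op = int(m.group(1)), m.group(2)
        if op == "LOAD":
            loaded.add(prog_id)
        else:
            loaded.discard(prog_id)

    print("prog-ids loaded but never unloaded:", sorted(loaded))
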
Jun 25 14:17:41.154000 audit: BPF prog-id=160 op=LOAD Jun 25 14:17:41.156000 audit: BPF prog-id=161 op=LOAD Jun 25 14:17:41.156000 audit[4624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=4555 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:41.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265666436393130663661613032616632353037663534623062303264 Jun 25 14:17:41.156000 audit: BPF prog-id=162 op=LOAD Jun 25 14:17:41.156000 audit[4624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=4555 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:41.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265666436393130663661613032616632353037663534623062303264 Jun 25 14:17:41.156000 audit: BPF prog-id=162 op=UNLOAD Jun 25 14:17:41.156000 audit: BPF prog-id=161 op=UNLOAD Jun 25 14:17:41.156000 audit: BPF prog-id=163 op=LOAD Jun 25 14:17:41.156000 audit[4624]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=4555 pid=4624 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:41.156000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265666436393130663661613032616632353037663534623062303264 Jun 25 14:17:41.191999 containerd[1804]: time="2024-06-25T14:17:41.191914395Z" level=info msg="StartContainer for \"2efd6910f6aa02af2507f54b0b02d55f2bb8a3573175afba76ac18bbb2598180\" returns successfully" Jun 25 14:17:41.784689 containerd[1804]: time="2024-06-25T14:17:41.784621599Z" level=info msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" Jun 25 14:17:41.799966 systemd-networkd[1524]: cali99eaadcc254: Gained IPv6LL Jun 25 14:17:41.991939 systemd-networkd[1524]: cali2dcce3be896: Gained IPv6LL Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.899 [INFO][4674] k8s.go 608: Cleaning up netns ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.899 [INFO][4674] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" iface="eth0" netns="/var/run/netns/cni-6c3fe9ea-5fc7-3aea-3004-d5a4cc9b7297" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.900 [INFO][4674] dataplane_linux.go 541: Entered netns, deleting veth. 
ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" iface="eth0" netns="/var/run/netns/cni-6c3fe9ea-5fc7-3aea-3004-d5a4cc9b7297" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.900 [INFO][4674] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" iface="eth0" netns="/var/run/netns/cni-6c3fe9ea-5fc7-3aea-3004-d5a4cc9b7297" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.900 [INFO][4674] k8s.go 615: Releasing IP address(es) ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.900 [INFO][4674] utils.go 188: Calico CNI releasing IP address ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.981 [INFO][4680] ipam_plugin.go 411: Releasing address using handleID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.982 [INFO][4680] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:41.983 [INFO][4680] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:42.001 [WARNING][4680] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:42.001 [INFO][4680] ipam_plugin.go 439: Releasing address using workloadID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:42.004 [INFO][4680] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:42.010407 containerd[1804]: 2024-06-25 14:17:42.007 [INFO][4674] k8s.go 621: Teardown processing complete. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:42.015713 containerd[1804]: time="2024-06-25T14:17:42.015113331Z" level=info msg="TearDown network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" successfully" Jun 25 14:17:42.015713 containerd[1804]: time="2024-06-25T14:17:42.015183437Z" level=info msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" returns successfully" Jun 25 14:17:42.016954 containerd[1804]: time="2024-06-25T14:17:42.016898098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nf7mv,Uid:394daa7f-b0b9-4121-9800-b5db47ff9611,Namespace:kube-system,Attempt:1,}" Jun 25 14:17:42.017713 systemd[1]: run-netns-cni\x2d6c3fe9ea\x2d5fc7\x2d3aea\x2d3004\x2dd5a4cc9b7297.mount: Deactivated successfully. 
Jun 25 14:17:42.024653 containerd[1804]: time="2024-06-25T14:17:42.024595592Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:42.027940 containerd[1804]: time="2024-06-25T14:17:42.027883275Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:17:42.033282 containerd[1804]: time="2024-06-25T14:17:42.033226630Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:42.046896 containerd[1804]: time="2024-06-25T14:17:42.045577610Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:42.056079 containerd[1804]: time="2024-06-25T14:17:42.056004815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:42.059579 containerd[1804]: time="2024-06-25T14:17:42.059497510Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 1.601804173s" Jun 25 14:17:42.059818 containerd[1804]: time="2024-06-25T14:17:42.059611776Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:17:42.061686 containerd[1804]: time="2024-06-25T14:17:42.061616158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:17:42.069972 containerd[1804]: time="2024-06-25T14:17:42.069913062Z" level=info msg="CreateContainer within sandbox \"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:17:42.114327 containerd[1804]: time="2024-06-25T14:17:42.114262116Z" level=info msg="CreateContainer within sandbox \"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"cfcf7e2994c49cca4463898269861d8d38e493dd873389568382e8d54c72e59f\"" Jun 25 14:17:42.122540 containerd[1804]: time="2024-06-25T14:17:42.122483071Z" level=info msg="StartContainer for \"cfcf7e2994c49cca4463898269861d8d38e493dd873389568382e8d54c72e59f\"" Jun 25 14:17:42.130393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016545320.mount: Deactivated successfully. 
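
[Editor's note] The node-driver-registrar pull above reports both the bytes fetched ("bytes read=9548567") and the wall-clock time ("in 1.601804173s"), which gives a rough effective transfer rate; a quick check with the logged figures (the reported image size of 10915087 is the unpacked size and differs from the bytes transferred):

    bytes_read = 9_548_567       # "bytes read" for calico/node-driver-registrar:v3.28.0
    duration_s = 1.601804173     # "in 1.601804173s" from the Pulled image message
    print(f"{bytes_read / duration_s / 1e6:.1f} MB/s")   # ~6.0 MB/s
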
Jun 25 14:17:42.192824 kubelet[3044]: I0625 14:17:42.192088 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-sxhnd" podStartSLOduration=38.192040063 podStartE2EDuration="38.192040063s" podCreationTimestamp="2024-06-25 14:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:42.106531513 +0000 UTC m=+52.644707147" watchObservedRunningTime="2024-06-25 14:17:42.192040063 +0000 UTC m=+52.730215685" Jun 25 14:17:42.253055 systemd[1]: Started cri-containerd-cfcf7e2994c49cca4463898269861d8d38e493dd873389568382e8d54c72e59f.scope - libcontainer container cfcf7e2994c49cca4463898269861d8d38e493dd873389568382e8d54c72e59f. Jun 25 14:17:42.259000 audit[4717]: NETFILTER_CFG table=filter:104 family=2 entries=14 op=nft_register_rule pid=4717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:42.259000 audit[4717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffcb795370 a2=0 a3=1 items=0 ppid=3185 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.259000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:42.262000 audit[4717]: NETFILTER_CFG table=nat:105 family=2 entries=14 op=nft_register_rule pid=4717 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:42.262000 audit[4717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcb795370 a2=0 a3=1 items=0 ppid=3185 pid=4717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.262000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:42.320000 audit[4733]: NETFILTER_CFG table=filter:106 family=2 entries=11 op=nft_register_rule pid=4733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:42.320000 audit[4733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe65630f0 a2=0 a3=1 items=0 ppid=3185 pid=4733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.320000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:42.325000 audit[4733]: NETFILTER_CFG table=nat:107 family=2 entries=35 op=nft_register_chain pid=4733 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:42.325000 audit[4733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffe65630f0 a2=0 a3=1 items=0 ppid=3185 pid=4733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.325000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:42.346000 audit: BPF prog-id=164 
op=LOAD Jun 25 14:17:42.346000 audit[4706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=4325 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.346000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366636637653239393463343963636134343633383938323639383631 Jun 25 14:17:42.347000 audit: BPF prog-id=165 op=LOAD Jun 25 14:17:42.347000 audit[4706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=4325 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366636637653239393463343963636134343633383938323639383631 Jun 25 14:17:42.347000 audit: BPF prog-id=165 op=UNLOAD Jun 25 14:17:42.347000 audit: BPF prog-id=164 op=UNLOAD Jun 25 14:17:42.347000 audit: BPF prog-id=166 op=LOAD Jun 25 14:17:42.347000 audit[4706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=4325 pid=4706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.347000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366636637653239393463343963636134343633383938323639383631 Jun 25 14:17:42.388921 containerd[1804]: time="2024-06-25T14:17:42.388859518Z" level=info msg="StartContainer for \"cfcf7e2994c49cca4463898269861d8d38e493dd873389568382e8d54c72e59f\" returns successfully" Jun 25 14:17:42.459559 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:17:42.459741 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4fac297f01c: link becomes ready Jun 25 14:17:42.459871 systemd-networkd[1524]: cali4fac297f01c: Link UP Jun 25 14:17:42.460314 systemd-networkd[1524]: cali4fac297f01c: Gained carrier Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.277 [INFO][4686] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0 coredns-7db6d8ff4d- kube-system 394daa7f-b0b9-4121-9800-b5db47ff9611 823 0 2024-06-25 14:17:04 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-245 coredns-7db6d8ff4d-nf7mv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4fac297f01c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" 
WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.278 [INFO][4686] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.368 [INFO][4723] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" HandleID="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.392 [INFO][4723] ipam_plugin.go 264: Auto assigning IP ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" HandleID="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000304260), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-245", "pod":"coredns-7db6d8ff4d-nf7mv", "timestamp":"2024-06-25 14:17:42.367984373 +0000 UTC"}, Hostname:"ip-172-31-16-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.392 [INFO][4723] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.392 [INFO][4723] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.393 [INFO][4723] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-245' Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.396 [INFO][4723] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.404 [INFO][4723] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.414 [INFO][4723] ipam.go 489: Trying affinity for 192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.419 [INFO][4723] ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.428 [INFO][4723] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.428 [INFO][4723] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.431 [INFO][4723] ipam.go 1685: Creating new handle: k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465 Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.437 [INFO][4723] ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.449 [INFO][4723] ipam.go 1216: Successfully claimed IPs: [192.168.51.196/26] block=192.168.51.192/26 handle="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.449 [INFO][4723] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.196/26] handle="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" host="ip-172-31-16-245" Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.449 [INFO][4723] ipam_plugin.go 373: Released host-wide IPAM lock. 
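Annotation (not part of the captured log): the Calico IPAM trace above follows the usual path — take the host-wide IPAM lock, confirm the host's affinity for the 192.168.51.192/26 block, load it, then claim one address (192.168.51.196) from it. A small standard-library sketch that checks the claimed address really falls inside the affine block; this is just arithmetic on the values logged above, not Calico's IPAM code.

```python
from ipaddress import ip_address, ip_network

block = ip_network("192.168.51.192/26")   # affine block from the trace above
claimed = ip_address("192.168.51.196")    # address assigned to the coredns pod

assert claimed in block
# A /26 holds 64 addresses; .196 sits at offset 4 from the block's network address (.192).
offset = int(claimed) - int(block.network_address)
print(block, "covers", block.num_addresses, "addresses; claimed offset:", offset)
```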
Jun 25 14:17:42.492719 containerd[1804]: 2024-06-25 14:17:42.449 [INFO][4723] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.51.196/26] IPv6=[] ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" HandleID="k8s-pod-network.20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.493957 containerd[1804]: 2024-06-25 14:17:42.452 [INFO][4686] k8s.go 386: Populated endpoint ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"394daa7f-b0b9-4121-9800-b5db47ff9611", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"", Pod:"coredns-7db6d8ff4d-nf7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fac297f01c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:42.493957 containerd[1804]: 2024-06-25 14:17:42.453 [INFO][4686] k8s.go 387: Calico CNI using IPs: [192.168.51.196/32] ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.493957 containerd[1804]: 2024-06-25 14:17:42.453 [INFO][4686] dataplane_linux.go 68: Setting the host side veth name to cali4fac297f01c ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.493957 containerd[1804]: 2024-06-25 14:17:42.461 [INFO][4686] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.493957 containerd[1804]: 
2024-06-25 14:17:42.465 [INFO][4686] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"394daa7f-b0b9-4121-9800-b5db47ff9611", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465", Pod:"coredns-7db6d8ff4d-nf7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fac297f01c", MAC:"d2:c8:56:c0:92:49", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:42.493957 containerd[1804]: 2024-06-25 14:17:42.483 [INFO][4686] k8s.go 500: Wrote updated endpoint to datastore ContainerID="20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465" Namespace="kube-system" Pod="coredns-7db6d8ff4d-nf7mv" WorkloadEndpoint="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:42.530000 audit[4773]: NETFILTER_CFG table=filter:108 family=2 entries=38 op=nft_register_chain pid=4773 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:17:42.530000 audit[4773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19408 a0=3 a1=ffffc753cc10 a2=0 a3=ffffb81aafa8 items=0 ppid=4077 pid=4773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.530000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:17:42.537746 containerd[1804]: time="2024-06-25T14:17:42.537555856Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:42.537925 containerd[1804]: time="2024-06-25T14:17:42.537768584Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:42.537925 containerd[1804]: time="2024-06-25T14:17:42.537819920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:42.537925 containerd[1804]: time="2024-06-25T14:17:42.537855801Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:42.587804 systemd[1]: Started cri-containerd-20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465.scope - libcontainer container 20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465. Jun 25 14:17:42.615000 audit: BPF prog-id=167 op=LOAD Jun 25 14:17:42.615000 audit: BPF prog-id=168 op=LOAD Jun 25 14:17:42.615000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=4776 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.615000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230613963353565663966313265623235646664396363346437396136 Jun 25 14:17:42.616000 audit: BPF prog-id=169 op=LOAD Jun 25 14:17:42.616000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=4776 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.616000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230613963353565663966313265623235646664396363346437396136 Jun 25 14:17:42.617000 audit: BPF prog-id=169 op=UNLOAD Jun 25 14:17:42.617000 audit: BPF prog-id=168 op=UNLOAD Jun 25 14:17:42.617000 audit: BPF prog-id=170 op=LOAD Jun 25 14:17:42.617000 audit[4786]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=4776 pid=4786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230613963353565663966313265623235646664396363346437396136 Jun 25 14:17:42.627203 kubelet[3044]: I0625 14:17:42.627146 3044 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 25 14:17:42.675495 containerd[1804]: time="2024-06-25T14:17:42.675195963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-nf7mv,Uid:394daa7f-b0b9-4121-9800-b5db47ff9611,Namespace:kube-system,Attempt:1,} returns sandbox id 
\"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465\"" Jun 25 14:17:42.685556 containerd[1804]: time="2024-06-25T14:17:42.685496901Z" level=info msg="CreateContainer within sandbox \"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:17:42.815394 containerd[1804]: time="2024-06-25T14:17:42.815328536Z" level=info msg="CreateContainer within sandbox \"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e57fb2921e9306a1264f04b41ed83dd7da672ac53c0175cc425c2c27d220c38\"" Jun 25 14:17:42.818906 containerd[1804]: time="2024-06-25T14:17:42.818848855Z" level=info msg="StartContainer for \"7e57fb2921e9306a1264f04b41ed83dd7da672ac53c0175cc425c2c27d220c38\"" Jun 25 14:17:42.887028 systemd[1]: Started cri-containerd-7e57fb2921e9306a1264f04b41ed83dd7da672ac53c0175cc425c2c27d220c38.scope - libcontainer container 7e57fb2921e9306a1264f04b41ed83dd7da672ac53c0175cc425c2c27d220c38. Jun 25 14:17:42.912000 audit: BPF prog-id=171 op=LOAD Jun 25 14:17:42.913000 audit: BPF prog-id=172 op=LOAD Jun 25 14:17:42.913000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001318b0 a2=78 a3=0 items=0 ppid=4776 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765353766623239323165393330366131323634663034623431656438 Jun 25 14:17:42.913000 audit: BPF prog-id=173 op=LOAD Jun 25 14:17:42.913000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000131640 a2=78 a3=0 items=0 ppid=4776 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765353766623239323165393330366131323634663034623431656438 Jun 25 14:17:42.913000 audit: BPF prog-id=173 op=UNLOAD Jun 25 14:17:42.913000 audit: BPF prog-id=172 op=UNLOAD Jun 25 14:17:42.913000 audit: BPF prog-id=174 op=LOAD Jun 25 14:17:42.913000 audit[4860]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000131b10 a2=78 a3=0 items=0 ppid=4776 pid=4860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:42.913000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765353766623239323165393330366131323634663034623431656438 Jun 25 14:17:42.957039 containerd[1804]: time="2024-06-25T14:17:42.956965887Z" level=info msg="StartContainer for \"7e57fb2921e9306a1264f04b41ed83dd7da672ac53c0175cc425c2c27d220c38\" returns successfully" Jun 25 14:17:43.046575 kubelet[3044]: I0625 14:17:43.046528 3044 
csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:17:43.046575 kubelet[3044]: I0625 14:17:43.046579 3044 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:17:43.151217 kubelet[3044]: I0625 14:17:43.151032 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-6sfl8" podStartSLOduration=26.649757907 podStartE2EDuration="30.151007618s" podCreationTimestamp="2024-06-25 14:17:13 +0000 UTC" firstStartedPulling="2024-06-25 14:17:38.560209508 +0000 UTC m=+49.098385106" lastFinishedPulling="2024-06-25 14:17:42.061459207 +0000 UTC m=+52.599634817" observedRunningTime="2024-06-25 14:17:43.109117099 +0000 UTC m=+53.647292745" watchObservedRunningTime="2024-06-25 14:17:43.151007618 +0000 UTC m=+53.689183228" Jun 25 14:17:43.173000 audit[4896]: NETFILTER_CFG table=filter:109 family=2 entries=8 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:43.173000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd12f8fa0 a2=0 a3=1 items=0 ppid=3185 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:43.173000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:43.176000 audit[4896]: NETFILTER_CFG table=nat:110 family=2 entries=44 op=nft_register_rule pid=4896 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:43.176000 audit[4896]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd12f8fa0 a2=0 a3=1 items=0 ppid=3185 pid=4896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:43.176000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:43.655892 systemd-networkd[1524]: cali4fac297f01c: Gained IPv6LL Jun 25 14:17:44.112485 kubelet[3044]: I0625 14:17:44.112381 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-nf7mv" podStartSLOduration=40.112357628 podStartE2EDuration="40.112357628s" podCreationTimestamp="2024-06-25 14:17:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:43.152947606 +0000 UTC m=+53.691123288" watchObservedRunningTime="2024-06-25 14:17:44.112357628 +0000 UTC m=+54.650533262" Jun 25 14:17:44.170990 kernel: kauditd_printk_skb: 125 callbacks suppressed Jun 25 14:17:44.171150 kernel: audit: type=1325 audit(1719325064.165:609): table=filter:111 family=2 entries=8 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:44.165000 audit[4899]: NETFILTER_CFG table=filter:111 family=2 entries=8 op=nft_register_rule pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:44.165000 audit[4899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 
a1=ffffd1c7b780 a2=0 a3=1 items=0 ppid=3185 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:44.184293 kernel: audit: type=1300 audit(1719325064.165:609): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd1c7b780 a2=0 a3=1 items=0 ppid=3185 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:44.184401 kernel: audit: type=1327 audit(1719325064.165:609): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:44.165000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:44.188000 audit[4899]: NETFILTER_CFG table=nat:112 family=2 entries=56 op=nft_register_chain pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:44.188000 audit[4899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd1c7b780 a2=0 a3=1 items=0 ppid=3185 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:44.200962 kernel: audit: type=1325 audit(1719325064.188:610): table=nat:112 family=2 entries=56 op=nft_register_chain pid=4899 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:44.201133 kernel: audit: type=1300 audit(1719325064.188:610): arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd1c7b780 a2=0 a3=1 items=0 ppid=3185 pid=4899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:44.188000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:44.204264 kernel: audit: type=1327 audit(1719325064.188:610): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:44.332991 systemd[1]: Started sshd@9-172.31.16.245:22-139.178.68.195:39438.service - OpenSSH per-connection server daemon (139.178.68.195:39438). Jun 25 14:17:44.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.245:22-139.178.68.195:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:44.338710 kernel: audit: type=1130 audit(1719325064.331:611): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.245:22-139.178.68.195:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:17:44.501248 sshd[4902]: Accepted publickey for core from 139.178.68.195 port 39438 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:44.499000 audit[4902]: USER_ACCT pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.514646 kernel: audit: type=1101 audit(1719325064.499:612): pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.514870 kernel: audit: type=1103 audit(1719325064.507:613): pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.507000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.516614 sshd[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:44.520552 kernel: audit: type=1006 audit(1719325064.508:614): pid=4902 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 14:17:44.508000 audit[4902]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6f758f0 a2=3 a3=1 items=0 ppid=1 pid=4902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:44.508000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:44.531569 systemd-logind[1794]: New session 10 of user core. Jun 25 14:17:44.537996 systemd[1]: Started session-10.scope - Session 10 of User core. 
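Annotation (not part of the captured log): the audit PROCTITLE fields above store the process's argv as a hex dump with NUL bytes separating the arguments. Decoding them is plain bytes work; the sketch below is illustrative and uses two proctitle values copied verbatim from the records above.

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Turn an audit PROCTITLE hex dump into the original argv list."""
    return bytes.fromhex(hexstr).decode("utf-8", "replace").split("\x00")

# From the sshd audit record above: "sshd: core [priv]"
print(decode_proctitle("737368643A20636F7265205B707269765D"))

# From the iptables-restore records earlier in the log:
# ['iptables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
print(decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"))
```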
Jun 25 14:17:44.549000 audit[4902]: USER_START pid=4902 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.552000 audit[4904]: CRED_ACQ pid=4904 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.828590 sshd[4902]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:44.832000 audit[4902]: USER_END pid=4902 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.832000 audit[4902]: CRED_DISP pid=4902 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:44.835991 systemd-logind[1794]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:17:44.836579 systemd[1]: sshd@9-172.31.16.245:22-139.178.68.195:39438.service: Deactivated successfully. Jun 25 14:17:44.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.245:22-139.178.68.195:39438 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:44.838278 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:17:44.840574 systemd-logind[1794]: Removed session 10. 
Jun 25 14:17:45.874000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.874000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400ca79980 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:45.874000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:45.875000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=185 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.875000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400ca799b0 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:45.875000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:45.875000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.875000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400ca799e0 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:45.875000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:45.896000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.896000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400a5d5e20 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:45.896000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:45.921000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.921000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40014b5680 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:45.921000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:45.925000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:45.925000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000e2fbe0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:17:45.925000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:17:46.027000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:46.027000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400cc85d10 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:46.027000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:46.028000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:17:46.028000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=6b a1=400ac1c4c0 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:17:46.028000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:17:47.087126 containerd[1804]: time="2024-06-25T14:17:47.087057239Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:47.091390 containerd[1804]: time="2024-06-25T14:17:47.091322006Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:17:47.093335 containerd[1804]: time="2024-06-25T14:17:47.093262867Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:47.102634 containerd[1804]: time="2024-06-25T14:17:47.102565071Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:47.106368 containerd[1804]: time="2024-06-25T14:17:47.106294271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:47.107788 containerd[1804]: time="2024-06-25T14:17:47.107731544Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 5.045850086s" Jun 25 14:17:47.108009 containerd[1804]: time="2024-06-25T14:17:47.107972136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:17:47.142636 containerd[1804]: time="2024-06-25T14:17:47.142567658Z" level=info msg="CreateContainer within sandbox \"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:17:47.173459 containerd[1804]: time="2024-06-25T14:17:47.173396048Z" level=info msg="CreateContainer within sandbox \"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6\"" Jun 25 14:17:47.174689 containerd[1804]: time="2024-06-25T14:17:47.174606314Z" level=info msg="StartContainer for \"4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6\"" Jun 25 14:17:47.251026 systemd[1]: Started cri-containerd-4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6.scope - libcontainer container 4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6. 
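Annotation (not part of the captured log): the AVC records above show kube-apiserver and kube-controller-manager, confined as container_t, being denied the "watch" permission on certificate files labelled etc_t; the paired SYSCALL records return exit=-13, i.e. -EACCES (and on arm64's generic syscall table, number 27 is inotify_add_watch, consistent with a failed file watch). A small sketch of reading those fields back out of an AVC line; the abridged sample string and regex are assumptions, only the errno mapping is standard library.

```python
import errno
import os
import re

avc = ('avc: denied { watch } for pid=2808 comm="kube-apiserver" '
       'path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 '
       'scontext=system_u:system_r:container_t:s0:c120,c956 '
       'tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0')

fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', avc))
print(fields["scontext"], "->", fields["tcontext"], fields["tclass"])

# The paired SYSCALL record reports exit=-13, a negative errno value:
print(errno.errorcode[13], "->", os.strerror(13))   # EACCES -> Permission denied
```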
Jun 25 14:17:47.285000 audit: BPF prog-id=175 op=LOAD Jun 25 14:17:47.286000 audit: BPF prog-id=176 op=LOAD Jun 25 14:17:47.286000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b18b0 a2=78 a3=0 items=0 ppid=4529 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:47.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465626137653938616634666335356164656638316263343563613362 Jun 25 14:17:47.286000 audit: BPF prog-id=177 op=LOAD Jun 25 14:17:47.286000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001b1640 a2=78 a3=0 items=0 ppid=4529 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:47.286000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465626137653938616634666335356164656638316263343563613362 Jun 25 14:17:47.286000 audit: BPF prog-id=177 op=UNLOAD Jun 25 14:17:47.287000 audit: BPF prog-id=176 op=UNLOAD Jun 25 14:17:47.287000 audit: BPF prog-id=178 op=LOAD Jun 25 14:17:47.287000 audit[4932]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001b1b10 a2=78 a3=0 items=0 ppid=4529 pid=4932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:47.287000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465626137653938616634666335356164656638316263343563613362 Jun 25 14:17:47.370333 containerd[1804]: time="2024-06-25T14:17:47.370270217Z" level=info msg="StartContainer for \"4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6\" returns successfully" Jun 25 14:17:48.248921 kubelet[3044]: I0625 14:17:48.248729 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-67f64868cd-fnncv" podStartSLOduration=29.130170625 podStartE2EDuration="35.248703435s" podCreationTimestamp="2024-06-25 14:17:13 +0000 UTC" firstStartedPulling="2024-06-25 14:17:40.990874559 +0000 UTC m=+51.529050169" lastFinishedPulling="2024-06-25 14:17:47.109407369 +0000 UTC m=+57.647582979" observedRunningTime="2024-06-25 14:17:48.144611968 +0000 UTC m=+58.682787602" watchObservedRunningTime="2024-06-25 14:17:48.248703435 +0000 UTC m=+58.786879081" Jun 25 14:17:49.745446 containerd[1804]: time="2024-06-25T14:17:49.745365376Z" level=info msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" Jun 25 14:17:49.879595 systemd[1]: Started sshd@10-172.31.16.245:22-139.178.68.195:48738.service - OpenSSH per-connection server daemon (139.178.68.195:48738). 
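Annotation (not part of the captured log): the kubelet pod_startup_latency_tracker entry above reports two durations for the calico-kube-controllers pod — podStartE2EDuration (observed running time minus pod creation, 35.248703435s) and podStartSLOduration (the same span minus the image-pull window, 29.130170625s). The sketch below reconstructs that arithmetic from the logged monotonic offsets; it is an illustration of how the numbers relate, not the kubelet's actual code.

```python
from decimal import Decimal

# Values copied from the kubelet log entry above (calico-kube-controllers pod),
# expressed as seconds since pod creation / monotonic boot offsets.
observed_running = Decimal("35.248703435")   # watchObservedRunningTime - creation
pull_started     = Decimal("51.529050169")   # firstStartedPulling, m=+...
pull_finished    = Decimal("57.647582979")   # lastFinishedPulling, m=+...

e2e = observed_running
slo = e2e - (pull_finished - pull_started)   # exclude time spent pulling images

print(f"podStartE2EDuration={e2e}s podStartSLOduration={slo}s")
# -> podStartE2EDuration=35.248703435s podStartSLOduration=29.130170625s
```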
Jun 25 14:17:49.885719 kernel: kauditd_printk_skb: 43 callbacks suppressed Jun 25 14:17:49.885807 kernel: audit: type=1130 audit(1719325069.879:634): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.245:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:49.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.245:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.843 [WARNING][5002] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0eb91a0-b29d-4d99-bc83-5df8975b23bb", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1", Pod:"csi-node-driver-6sfl8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali162e1e04848", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.843 [INFO][5002] k8s.go 608: Cleaning up netns ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.843 [INFO][5002] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" iface="eth0" netns="" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.843 [INFO][5002] k8s.go 615: Releasing IP address(es) ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.843 [INFO][5002] utils.go 188: Calico CNI releasing IP address ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.909 [INFO][5010] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.909 [INFO][5010] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.909 [INFO][5010] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.923 [WARNING][5010] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.923 [INFO][5010] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.927 [INFO][5010] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:49.931852 containerd[1804]: 2024-06-25 14:17:49.929 [INFO][5002] k8s.go 621: Teardown processing complete. 
ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:49.933009 containerd[1804]: time="2024-06-25T14:17:49.932953820Z" level=info msg="TearDown network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" successfully" Jun 25 14:17:49.933186 containerd[1804]: time="2024-06-25T14:17:49.933149951Z" level=info msg="StopPodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" returns successfully" Jun 25 14:17:49.934214 containerd[1804]: time="2024-06-25T14:17:49.934141921Z" level=info msg="RemovePodSandbox for \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" Jun 25 14:17:49.934793 containerd[1804]: time="2024-06-25T14:17:49.934625924Z" level=info msg="Forcibly stopping sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\"" Jun 25 14:17:50.082650 kernel: audit: type=1101 audit(1719325070.075:635): pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.075000 audit[5015]: USER_ACCT pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.088460 sshd[5015]: Accepted publickey for core from 139.178.68.195 port 48738 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:50.096734 kernel: audit: type=1103 audit(1719325070.089:636): pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.089000 audit[5015]: CRED_ACQ pid=5015 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.101710 kernel: audit: type=1006 audit(1719325070.096:637): pid=5015 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jun 25 14:17:50.096000 audit[5015]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4d753b0 a2=3 a3=1 items=0 ppid=1 pid=5015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:50.109166 sshd[5015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:50.114643 kernel: audit: type=1300 audit(1719325070.096:637): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4d753b0 a2=3 a3=1 items=0 ppid=1 pid=5015 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:50.114767 kernel: audit: type=1327 audit(1719325070.096:637): proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:50.096000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:50.129749 systemd-logind[1794]: New session 11 of user core. 
Jun 25 14:17:50.132009 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 14:17:50.154000 audit[5015]: USER_START pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.158000 audit[5042]: CRED_ACQ pid=5042 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.165509 kernel: audit: type=1105 audit(1719325070.154:638): pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.167598 kernel: audit: type=1103 audit(1719325070.158:639): pid=5042 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.068 [WARNING][5032] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f0eb91a0-b29d-4d99-bc83-5df8975b23bb", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"6cc9df58f4", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"24a127ce7f856af43b66c375f27e9cafd6330596d519817f403828de4c4589f1", Pod:"csi-node-driver-6sfl8", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.51.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali162e1e04848", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.069 [INFO][5032] k8s.go 608: Cleaning up netns ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.069 [INFO][5032] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" iface="eth0" netns="" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.069 [INFO][5032] k8s.go 615: Releasing IP address(es) ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.069 [INFO][5032] utils.go 188: Calico CNI releasing IP address ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.177 [INFO][5038] ipam_plugin.go 411: Releasing address using handleID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.178 [INFO][5038] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.178 [INFO][5038] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.191 [WARNING][5038] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.191 [INFO][5038] ipam_plugin.go 439: Releasing address using workloadID ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" HandleID="k8s-pod-network.e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Workload="ip--172--31--16--245-k8s-csi--node--driver--6sfl8-eth0" Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.193 [INFO][5038] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:50.199141 containerd[1804]: 2024-06-25 14:17:50.196 [INFO][5032] k8s.go 621: Teardown processing complete. ContainerID="e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce" Jun 25 14:17:50.200272 containerd[1804]: time="2024-06-25T14:17:50.200200201Z" level=info msg="TearDown network for sandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" successfully" Jun 25 14:17:50.224626 containerd[1804]: time="2024-06-25T14:17:50.224571845Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jun 25 14:17:50.225029 containerd[1804]: time="2024-06-25T14:17:50.224975399Z" level=info msg="RemovePodSandbox \"e9244e0174dcc76fb2d5f0cf9efe179a5c7c17b20b951f7c87a2d4f483f0fdce\" returns successfully" Jun 25 14:17:50.226195 containerd[1804]: time="2024-06-25T14:17:50.226121727Z" level=info msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" Jun 25 14:17:50.532513 sshd[5015]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:50.533000 audit[5015]: USER_END pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.543504 systemd-logind[1794]: Session 11 logged out. Waiting for processes to exit. Jun 25 14:17:50.548854 kernel: audit: type=1106 audit(1719325070.533:640): pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.548963 kernel: audit: type=1104 audit(1719325070.534:641): pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.534000 audit[5015]: CRED_DISP pid=5015 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.543846 systemd[1]: sshd@10-172.31.16.245:22-139.178.68.195:48738.service: Deactivated successfully. Jun 25 14:17:50.545142 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:17:50.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.245:22-139.178.68.195:48738 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:50.563879 systemd-logind[1794]: Removed session 11. Jun 25 14:17:50.571394 systemd[1]: Started sshd@11-172.31.16.245:22-139.178.68.195:48742.service - OpenSSH per-connection server daemon (139.178.68.195:48742). Jun 25 14:17:50.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.245:22-139.178.68.195:48742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.456 [WARNING][5060] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0", GenerateName:"calico-kube-controllers-67f64868cd-", Namespace:"calico-system", SelfLink:"", UID:"766af770-6e30-4ef9-b8c4-4b069a4bd63d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f64868cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906", Pod:"calico-kube-controllers-67f64868cd-fnncv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99eaadcc254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.456 [INFO][5060] k8s.go 608: Cleaning up netns ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.457 [INFO][5060] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" iface="eth0" netns="" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.457 [INFO][5060] k8s.go 615: Releasing IP address(es) ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.457 [INFO][5060] utils.go 188: Calico CNI releasing IP address ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.537 [INFO][5077] ipam_plugin.go 411: Releasing address using handleID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.537 [INFO][5077] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.537 [INFO][5077] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.568 [WARNING][5077] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.568 [INFO][5077] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.580 [INFO][5077] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:50.594301 containerd[1804]: 2024-06-25 14:17:50.591 [INFO][5060] k8s.go 621: Teardown processing complete. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.595427 containerd[1804]: time="2024-06-25T14:17:50.595368070Z" level=info msg="TearDown network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" successfully" Jun 25 14:17:50.595581 containerd[1804]: time="2024-06-25T14:17:50.595544520Z" level=info msg="StopPodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" returns successfully" Jun 25 14:17:50.596429 containerd[1804]: time="2024-06-25T14:17:50.596365488Z" level=info msg="RemovePodSandbox for \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" Jun 25 14:17:50.596575 containerd[1804]: time="2024-06-25T14:17:50.596432689Z" level=info msg="Forcibly stopping sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\"" Jun 25 14:17:50.753000 audit[5085]: USER_ACCT pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.754328 sshd[5085]: Accepted publickey for core from 139.178.68.195 port 48742 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:50.756000 audit[5085]: CRED_ACQ pid=5085 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.756000 audit[5085]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3963170 a2=3 a3=1 items=0 ppid=1 pid=5085 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:50.756000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:50.758079 sshd[5085]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:50.773062 systemd-logind[1794]: New session 12 of user core. Jun 25 14:17:50.778021 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 14:17:50.793000 audit[5085]: USER_START pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.797000 audit[5112]: CRED_ACQ pid=5112 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.704 [WARNING][5100] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0", GenerateName:"calico-kube-controllers-67f64868cd-", Namespace:"calico-system", SelfLink:"", UID:"766af770-6e30-4ef9-b8c4-4b069a4bd63d", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"67f64868cd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"e1e67fd4e7ff4d22b1a51137032f23486ad5c2f4a4ccf42e39dc836e83511906", Pod:"calico-kube-controllers-67f64868cd-fnncv", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.51.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali99eaadcc254", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.705 [INFO][5100] k8s.go 608: Cleaning up netns ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.705 [INFO][5100] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" iface="eth0" netns="" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.705 [INFO][5100] k8s.go 615: Releasing IP address(es) ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.705 [INFO][5100] utils.go 188: Calico CNI releasing IP address ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.779 [INFO][5107] ipam_plugin.go 411: Releasing address using handleID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.781 [INFO][5107] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.781 [INFO][5107] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.805 [WARNING][5107] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.805 [INFO][5107] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" HandleID="k8s-pod-network.3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Workload="ip--172--31--16--245-k8s-calico--kube--controllers--67f64868cd--fnncv-eth0" Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.808 [INFO][5107] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:50.813186 containerd[1804]: 2024-06-25 14:17:50.810 [INFO][5100] k8s.go 621: Teardown processing complete. ContainerID="3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2" Jun 25 14:17:50.814506 containerd[1804]: time="2024-06-25T14:17:50.813238587Z" level=info msg="TearDown network for sandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" successfully" Jun 25 14:17:50.824281 containerd[1804]: time="2024-06-25T14:17:50.824213209Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:17:50.824483 containerd[1804]: time="2024-06-25T14:17:50.824384463Z" level=info msg="RemovePodSandbox \"3dd9156c5b7688058cfa9e949a34735d260c0e1868ae2bf488d3d25ff6c002d2\" returns successfully" Jun 25 14:17:50.826381 containerd[1804]: time="2024-06-25T14:17:50.826170616Z" level=info msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.925 [WARNING][5126] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"394daa7f-b0b9-4121-9800-b5db47ff9611", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465", Pod:"coredns-7db6d8ff4d-nf7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fac297f01c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.926 [INFO][5126] k8s.go 608: Cleaning up netns ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.926 [INFO][5126] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" iface="eth0" netns="" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.926 [INFO][5126] k8s.go 615: Releasing IP address(es) ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.927 [INFO][5126] utils.go 188: Calico CNI releasing IP address ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.985 [INFO][5137] ipam_plugin.go 411: Releasing address using handleID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.986 [INFO][5137] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:50.986 [INFO][5137] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:51.001 [WARNING][5137] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:51.001 [INFO][5137] ipam_plugin.go 439: Releasing address using workloadID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:51.004 [INFO][5137] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:51.010027 containerd[1804]: 2024-06-25 14:17:51.007 [INFO][5126] k8s.go 621: Teardown processing complete. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.011234 containerd[1804]: time="2024-06-25T14:17:51.011175515Z" level=info msg="TearDown network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" successfully" Jun 25 14:17:51.011377 containerd[1804]: time="2024-06-25T14:17:51.011343518Z" level=info msg="StopPodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" returns successfully" Jun 25 14:17:51.012496 containerd[1804]: time="2024-06-25T14:17:51.012451661Z" level=info msg="RemovePodSandbox for \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" Jun 25 14:17:51.012769 containerd[1804]: time="2024-06-25T14:17:51.012693308Z" level=info msg="Forcibly stopping sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\"" Jun 25 14:17:51.260070 sshd[5085]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:51.263000 audit[5085]: USER_END pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.263000 audit[5085]: CRED_DISP pid=5085 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.266829 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:17:51.268008 systemd[1]: sshd@11-172.31.16.245:22-139.178.68.195:48742.service: Deactivated successfully. Jun 25 14:17:51.267000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.245:22-139.178.68.195:48742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:51.270327 systemd-logind[1794]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:17:51.273200 systemd-logind[1794]: Removed session 12. Jun 25 14:17:51.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.245:22-139.178.68.195:48744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:51.300558 systemd[1]: Started sshd@12-172.31.16.245:22-139.178.68.195:48744.service - OpenSSH per-connection server daemon (139.178.68.195:48744). 
Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.224 [WARNING][5156] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"394daa7f-b0b9-4121-9800-b5db47ff9611", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"20a9c55ef9f12eb25dfd9cc4d79a646173cd15eb347f6e2d9dbf9c1896fb2465", Pod:"coredns-7db6d8ff4d-nf7mv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4fac297f01c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.224 [INFO][5156] k8s.go 608: Cleaning up netns ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.224 [INFO][5156] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" iface="eth0" netns="" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.224 [INFO][5156] k8s.go 615: Releasing IP address(es) ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.225 [INFO][5156] utils.go 188: Calico CNI releasing IP address ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.327 [INFO][5162] ipam_plugin.go 411: Releasing address using handleID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.327 [INFO][5162] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.328 [INFO][5162] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.357 [WARNING][5162] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.357 [INFO][5162] ipam_plugin.go 439: Releasing address using workloadID ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" HandleID="k8s-pod-network.398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--nf7mv-eth0" Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.361 [INFO][5162] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:51.366181 containerd[1804]: 2024-06-25 14:17:51.363 [INFO][5156] k8s.go 621: Teardown processing complete. ContainerID="398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e" Jun 25 14:17:51.367385 containerd[1804]: time="2024-06-25T14:17:51.367333114Z" level=info msg="TearDown network for sandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" successfully" Jun 25 14:17:51.372880 containerd[1804]: time="2024-06-25T14:17:51.372823165Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:17:51.373198 containerd[1804]: time="2024-06-25T14:17:51.373155845Z" level=info msg="RemovePodSandbox \"398e14762887bcd0debb12efede29174042497c8b460263ade04220534232c0e\" returns successfully" Jun 25 14:17:51.374156 containerd[1804]: time="2024-06-25T14:17:51.374094906Z" level=info msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" Jun 25 14:17:51.493000 audit[5168]: USER_ACCT pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.495765 sshd[5168]: Accepted publickey for core from 139.178.68.195 port 48744 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:51.496000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.496000 audit[5168]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff72a1350 a2=3 a3=1 items=0 ppid=1 pid=5168 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.496000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:51.498607 sshd[5168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:51.513445 systemd-logind[1794]: New session 13 of user core. Jun 25 14:17:51.518006 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jun 25 14:17:51.528000 audit[5168]: USER_START pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.531000 audit[5196]: CRED_ACQ pid=5196 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.479 [WARNING][5185] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7", Pod:"coredns-7db6d8ff4d-sxhnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2dcce3be896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.480 [INFO][5185] k8s.go 608: Cleaning up netns ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.480 [INFO][5185] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" iface="eth0" netns="" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.480 [INFO][5185] k8s.go 615: Releasing IP address(es) ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.480 [INFO][5185] utils.go 188: Calico CNI releasing IP address ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.558 [INFO][5191] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.559 [INFO][5191] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.559 [INFO][5191] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.572 [WARNING][5191] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.572 [INFO][5191] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.574 [INFO][5191] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:51.579141 containerd[1804]: 2024-06-25 14:17:51.576 [INFO][5185] k8s.go 621: Teardown processing complete. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.580218 containerd[1804]: time="2024-06-25T14:17:51.579768100Z" level=info msg="TearDown network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" successfully" Jun 25 14:17:51.580218 containerd[1804]: time="2024-06-25T14:17:51.579879365Z" level=info msg="StopPodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" returns successfully" Jun 25 14:17:51.580829 containerd[1804]: time="2024-06-25T14:17:51.580784430Z" level=info msg="RemovePodSandbox for \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" Jun 25 14:17:51.581077 containerd[1804]: time="2024-06-25T14:17:51.581011953Z" level=info msg="Forcibly stopping sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\"" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.712 [WARNING][5210] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7a3e59ff-5767-4d1a-8966-6d8e2be16aa6", ResourceVersion:"829", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"a1677a24b87fdba171a7a3f3735fec68da7a4b806ffe3580811d16ddb50b27e7", Pod:"coredns-7db6d8ff4d-sxhnd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.51.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2dcce3be896", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.713 [INFO][5210] k8s.go 608: Cleaning up netns ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.713 [INFO][5210] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" iface="eth0" netns="" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.713 [INFO][5210] k8s.go 615: Releasing IP address(es) ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.713 [INFO][5210] utils.go 188: Calico CNI releasing IP address ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.804 [INFO][5222] ipam_plugin.go 411: Releasing address using handleID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.806 [INFO][5222] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.806 [INFO][5222] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.829 [WARNING][5222] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.829 [INFO][5222] ipam_plugin.go 439: Releasing address using workloadID ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" HandleID="k8s-pod-network.4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Workload="ip--172--31--16--245-k8s-coredns--7db6d8ff4d--sxhnd-eth0" Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.838 [INFO][5222] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:17:51.845098 containerd[1804]: 2024-06-25 14:17:51.841 [INFO][5210] k8s.go 621: Teardown processing complete. ContainerID="4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7" Jun 25 14:17:51.846411 containerd[1804]: time="2024-06-25T14:17:51.845150856Z" level=info msg="TearDown network for sandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" successfully" Jun 25 14:17:51.849947 sshd[5168]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:51.851000 audit[5168]: USER_END pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.853162 containerd[1804]: time="2024-06-25T14:17:51.852637122Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:17:51.853162 containerd[1804]: time="2024-06-25T14:17:51.852809913Z" level=info msg="RemovePodSandbox \"4b782b5a22ed55c69bba468d7f7ed7521e5db1a024a8838324f936b84de482c7\" returns successfully" Jun 25 14:17:51.853000 audit[5168]: CRED_DISP pid=5168 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:51.857077 systemd-logind[1794]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:17:51.857485 systemd[1]: sshd@12-172.31.16.245:22-139.178.68.195:48744.service: Deactivated successfully. Jun 25 14:17:51.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.245:22-139.178.68.195:48744 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:51.859097 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:17:51.861102 systemd-logind[1794]: Removed session 13. Jun 25 14:17:55.746242 systemd[1]: run-containerd-runc-k8s.io-4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6-runc.kIyOgp.mount: Deactivated successfully. Jun 25 14:17:56.890075 systemd[1]: Started sshd@13-172.31.16.245:22-139.178.68.195:48754.service - OpenSSH per-connection server daemon (139.178.68.195:48754). 
Jun 25 14:17:56.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.245:22-139.178.68.195:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:56.892821 kernel: kauditd_printk_skb: 23 callbacks suppressed Jun 25 14:17:56.892923 kernel: audit: type=1130 audit(1719325076.891:661): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.245:22-139.178.68.195:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:57.059000 audit[5258]: USER_ACCT pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.060084 sshd[5258]: Accepted publickey for core from 139.178.68.195 port 48754 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:17:57.064699 kernel: audit: type=1101 audit(1719325077.059:662): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.065000 audit[5258]: CRED_ACQ pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.068542 sshd[5258]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:17:57.075450 kernel: audit: type=1103 audit(1719325077.065:663): pid=5258 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.075574 kernel: audit: type=1006 audit(1719325077.067:664): pid=5258 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jun 25 14:17:57.075625 kernel: audit: type=1300 audit(1719325077.067:664): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce8ce8b0 a2=3 a3=1 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:57.067000 audit[5258]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce8ce8b0 a2=3 a3=1 items=0 ppid=1 pid=5258 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:57.079538 systemd-logind[1794]: New session 14 of user core. Jun 25 14:17:57.087977 kernel: audit: type=1327 audit(1719325077.067:664): proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:57.067000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:17:57.087016 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:17:57.096000 audit[5258]: USER_START pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.099000 audit[5260]: CRED_ACQ pid=5260 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.106768 kernel: audit: type=1105 audit(1719325077.096:665): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.106841 kernel: audit: type=1103 audit(1719325077.099:666): pid=5260 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.338578 sshd[5258]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:57.340000 audit[5258]: USER_END pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.344516 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:17:57.341000 audit[5258]: CRED_DISP pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.346206 systemd[1]: sshd@13-172.31.16.245:22-139.178.68.195:48754.service: Deactivated successfully. Jun 25 14:17:57.350995 kernel: audit: type=1106 audit(1719325077.340:667): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.351148 kernel: audit: type=1104 audit(1719325077.341:668): pid=5258 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:57.351128 systemd-logind[1794]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:17:57.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.245:22-139.178.68.195:48754 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:57.353150 systemd-logind[1794]: Removed session 14. Jun 25 14:18:02.381367 systemd[1]: Started sshd@14-172.31.16.245:22-139.178.68.195:53636.service - OpenSSH per-connection server daemon (139.178.68.195:53636). 
Jun 25 14:18:02.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.245:22-139.178.68.195:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:02.385688 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:02.385825 kernel: audit: type=1130 audit(1719325082.380:670): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.245:22-139.178.68.195:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:02.551000 audit[5281]: USER_ACCT pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.552601 sshd[5281]: Accepted publickey for core from 139.178.68.195 port 53636 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:02.557699 kernel: audit: type=1101 audit(1719325082.551:671): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.558000 audit[5281]: CRED_ACQ pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.560060 sshd[5281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:02.566209 kernel: audit: type=1103 audit(1719325082.558:672): pid=5281 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.566441 kernel: audit: type=1006 audit(1719325082.558:673): pid=5281 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 14:18:02.558000 audit[5281]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f89840 a2=3 a3=1 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:02.571509 kernel: audit: type=1300 audit(1719325082.558:673): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f89840 a2=3 a3=1 items=0 ppid=1 pid=5281 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:02.558000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:02.573891 kernel: audit: type=1327 audit(1719325082.558:673): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:02.579755 systemd-logind[1794]: New session 15 of user core. Jun 25 14:18:02.585979 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 14:18:02.595000 audit[5281]: USER_START pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.598000 audit[5285]: CRED_ACQ pid=5285 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.605623 kernel: audit: type=1105 audit(1719325082.595:674): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.605775 kernel: audit: type=1103 audit(1719325082.598:675): pid=5285 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.842440 sshd[5281]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:02.844000 audit[5281]: USER_END pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.848042 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:18:02.849262 systemd[1]: sshd@14-172.31.16.245:22-139.178.68.195:53636.service: Deactivated successfully. Jun 25 14:18:02.844000 audit[5281]: CRED_DISP pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.855617 kernel: audit: type=1106 audit(1719325082.844:676): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.855767 kernel: audit: type=1104 audit(1719325082.844:677): pid=5281 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:02.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.245:22-139.178.68.195:53636 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:02.857055 systemd-logind[1794]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:18:02.858604 systemd-logind[1794]: Removed session 15. 
Jun 25 14:18:03.040000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:03.040000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400283a6a0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:03.040000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:03.045000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:03.045000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:03.045000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=b a1=400283a6c0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:03.045000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:03.045000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4002e84b00 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:03.045000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:03.048000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:03.048000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=400283aa00 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:03.048000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:07.893612 systemd[1]: Started sshd@15-172.31.16.245:22-139.178.68.195:53642.service - OpenSSH per-connection server daemon (139.178.68.195:53642). Jun 25 14:18:07.899695 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:18:07.899820 kernel: audit: type=1130 audit(1719325087.893:683): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.245:22-139.178.68.195:53642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:07.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.245:22-139.178.68.195:53642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:08.067000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.068130 sshd[5297]: Accepted publickey for core from 139.178.68.195 port 53642 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:08.071390 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:08.069000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.077875 kernel: audit: type=1101 audit(1719325088.067:684): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.077978 kernel: audit: type=1103 audit(1719325088.069:685): pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.081958 kernel: audit: type=1006 audit(1719325088.070:686): pid=5297 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 14:18:08.082052 kernel: audit: type=1300 audit(1719325088.070:686): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcf968c0 a2=3 a3=1 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:08.070000 audit[5297]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdcf968c0 a2=3 a3=1 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:08.070000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:08.085776 systemd-logind[1794]: New 
session 16 of user core. Jun 25 14:18:08.089057 kernel: audit: type=1327 audit(1719325088.070:686): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:08.087964 systemd[1]: Started session-16.scope - Session 16 of User core. Jun 25 14:18:08.097000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.100000 audit[5304]: CRED_ACQ pid=5304 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.107260 kernel: audit: type=1105 audit(1719325088.097:687): pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.107343 kernel: audit: type=1103 audit(1719325088.100:688): pid=5304 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.367501 sshd[5297]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:08.369000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.373774 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:18:08.370000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.375143 systemd[1]: sshd@15-172.31.16.245:22-139.178.68.195:53642.service: Deactivated successfully. Jun 25 14:18:08.380106 kernel: audit: type=1106 audit(1719325088.369:689): pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.380227 kernel: audit: type=1104 audit(1719325088.370:690): pid=5297 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:08.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.245:22-139.178.68.195:53642 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:08.380801 systemd-logind[1794]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:18:08.382927 systemd-logind[1794]: Removed session 16. 
Jun 25 14:18:12.659775 systemd[1]: run-containerd-runc-k8s.io-6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701-runc.n1bkdc.mount: Deactivated successfully. Jun 25 14:18:13.411721 systemd[1]: Started sshd@16-172.31.16.245:22-139.178.68.195:56846.service - OpenSSH per-connection server daemon (139.178.68.195:56846). Jun 25 14:18:13.417744 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:13.417846 kernel: audit: type=1130 audit(1719325093.411:692): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.245:22-139.178.68.195:56846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:13.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.245:22-139.178.68.195:56846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:13.588000 audit[5338]: USER_ACCT pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.592387 sshd[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:13.593293 sshd[5338]: Accepted publickey for core from 139.178.68.195 port 56846 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:13.588000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.597741 kernel: audit: type=1101 audit(1719325093.588:693): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.597927 kernel: audit: type=1103 audit(1719325093.588:694): pid=5338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.600884 kernel: audit: type=1006 audit(1719325093.588:695): pid=5338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Jun 25 14:18:13.588000 audit[5338]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeea4a570 a2=3 a3=1 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:13.605825 kernel: audit: type=1300 audit(1719325093.588:695): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeea4a570 a2=3 a3=1 items=0 ppid=1 pid=5338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:13.588000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:13.607766 kernel: audit: type=1327 audit(1719325093.588:695): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:13.612134 
systemd-logind[1794]: New session 17 of user core. Jun 25 14:18:13.619960 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 14:18:13.628000 audit[5338]: USER_START pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.631000 audit[5340]: CRED_ACQ pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.638595 kernel: audit: type=1105 audit(1719325093.628:696): pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.638732 kernel: audit: type=1103 audit(1719325093.631:697): pid=5340 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.886951 sshd[5338]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:13.889000 audit[5338]: USER_END pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.893096 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:18:13.894777 systemd[1]: sshd@16-172.31.16.245:22-139.178.68.195:56846.service: Deactivated successfully. Jun 25 14:18:13.889000 audit[5338]: CRED_DISP pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.900626 kernel: audit: type=1106 audit(1719325093.889:698): pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.900780 kernel: audit: type=1104 audit(1719325093.889:699): pid=5338 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:13.900942 systemd-logind[1794]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:18:13.894000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.245:22-139.178.68.195:56846 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:13.902844 systemd-logind[1794]: Removed session 17. 
Jun 25 14:18:13.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.245:22-139.178.68.195:56858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:13.931895 systemd[1]: Started sshd@17-172.31.16.245:22-139.178.68.195:56858.service - OpenSSH per-connection server daemon (139.178.68.195:56858). Jun 25 14:18:14.102000 audit[5350]: USER_ACCT pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.103895 sshd[5350]: Accepted publickey for core from 139.178.68.195 port 56858 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:14.105000 audit[5350]: CRED_ACQ pid=5350 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.105000 audit[5350]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff95e1a10 a2=3 a3=1 items=0 ppid=1 pid=5350 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:14.105000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:14.107332 sshd[5350]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:14.116294 systemd-logind[1794]: New session 18 of user core. Jun 25 14:18:14.119968 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 14:18:14.130000 audit[5350]: USER_START pid=5350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.133000 audit[5352]: CRED_ACQ pid=5352 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.629813 sshd[5350]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:14.631000 audit[5350]: USER_END pid=5350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.631000 audit[5350]: CRED_DISP pid=5350 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.636171 systemd[1]: sshd@17-172.31.16.245:22-139.178.68.195:56858.service: Deactivated successfully. Jun 25 14:18:14.637575 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:18:14.638042 systemd-logind[1794]: Session 18 logged out. Waiting for processes to exit. 
Jun 25 14:18:14.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.245:22-139.178.68.195:56858 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:14.641366 systemd-logind[1794]: Removed session 18. Jun 25 14:18:14.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.245:22-139.178.68.195:56868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:14.672189 systemd[1]: Started sshd@18-172.31.16.245:22-139.178.68.195:56868.service - OpenSSH per-connection server daemon (139.178.68.195:56868). Jun 25 14:18:14.846000 audit[5360]: USER_ACCT pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.847359 sshd[5360]: Accepted publickey for core from 139.178.68.195 port 56868 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:14.850383 sshd[5360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:14.848000 audit[5360]: CRED_ACQ pid=5360 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.848000 audit[5360]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5137710 a2=3 a3=1 items=0 ppid=1 pid=5360 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:14.848000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:14.861759 systemd-logind[1794]: New session 19 of user core. Jun 25 14:18:14.863975 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 14:18:14.874000 audit[5360]: USER_START pid=5360 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:14.878000 audit[5362]: CRED_ACQ pid=5362 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.161000 audit[5380]: NETFILTER_CFG table=filter:113 family=2 entries=20 op=nft_register_rule pid=5380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.161000 audit[5380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe3154660 a2=0 a3=1 items=0 ppid=3185 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.163000 audit[5380]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=5380 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.163000 audit[5380]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe3154660 a2=0 a3=1 items=0 ppid=3185 pid=5380 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.163000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.177717 sshd[5360]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:18.181000 audit[5360]: USER_END pid=5360 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.181000 audit[5360]: CRED_DISP pid=5360 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.245:22-139.178.68.195:56868 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:18.186117 systemd[1]: sshd@18-172.31.16.245:22-139.178.68.195:56868.service: Deactivated successfully. Jun 25 14:18:18.187454 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:18:18.187769 systemd-logind[1794]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:18:18.190063 systemd[1]: session-19.scope: Consumed 1.012s CPU time. Jun 25 14:18:18.191535 systemd-logind[1794]: Removed session 19. 
Jun 25 14:18:18.216000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.245:22-139.178.68.195:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:18.216435 systemd[1]: Started sshd@19-172.31.16.245:22-139.178.68.195:44766.service - OpenSSH per-connection server daemon (139.178.68.195:44766). Jun 25 14:18:18.407000 audit[5384]: USER_ACCT pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.408100 sshd[5384]: Accepted publickey for core from 139.178.68.195 port 44766 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:18.409000 audit[5384]: CRED_ACQ pid=5384 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.409000 audit[5384]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffecfae2f0 a2=3 a3=1 items=0 ppid=1 pid=5384 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.409000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:18.411133 sshd[5384]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:18.422219 systemd-logind[1794]: New session 20 of user core. Jun 25 14:18:18.425998 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jun 25 14:18:18.442356 kernel: kauditd_printk_skb: 35 callbacks suppressed Jun 25 14:18:18.442525 kernel: audit: type=1105 audit(1719325098.438:725): pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.438000 audit[5384]: USER_START pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.445000 audit[5387]: CRED_ACQ pid=5387 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.453716 kernel: audit: type=1103 audit(1719325098.445:726): pid=5387 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.455000 audit[5386]: NETFILTER_CFG table=filter:115 family=2 entries=32 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.455000 audit[5386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffcf318450 a2=0 a3=1 items=0 ppid=3185 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.464305 kernel: audit: type=1325 audit(1719325098.455:727): table=filter:115 family=2 entries=32 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.464531 kernel: audit: type=1300 audit(1719325098.455:727): arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffcf318450 a2=0 a3=1 items=0 ppid=3185 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.472392 kernel: audit: type=1327 audit(1719325098.455:727): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.472526 kernel: audit: type=1325 audit(1719325098.455:728): table=nat:116 family=2 entries=20 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.455000 audit[5386]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5386 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:18.455000 audit[5386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcf318450 a2=0 a3=1 items=0 ppid=3185 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 
14:18:18.480333 kernel: audit: type=1300 audit(1719325098.455:728): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcf318450 a2=0 a3=1 items=0 ppid=3185 pid=5386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.455000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.483256 kernel: audit: type=1327 audit(1719325098.455:728): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:18.976903 sshd[5384]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:18.978000 audit[5384]: USER_END pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.984978 systemd[1]: sshd@19-172.31.16.245:22-139.178.68.195:44766.service: Deactivated successfully. Jun 25 14:18:18.986368 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 14:18:18.978000 audit[5384]: CRED_DISP pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.991559 kernel: audit: type=1106 audit(1719325098.978:729): pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.991822 kernel: audit: type=1104 audit(1719325098.978:730): pid=5384 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.992353 systemd-logind[1794]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:18:18.994187 systemd-logind[1794]: Removed session 20. Jun 25 14:18:18.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.245:22-139.178.68.195:44766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:19.010572 systemd[1]: Started sshd@20-172.31.16.245:22-139.178.68.195:44768.service - OpenSSH per-connection server daemon (139.178.68.195:44768). Jun 25 14:18:19.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.245:22-139.178.68.195:44768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:19.186214 sshd[5400]: Accepted publickey for core from 139.178.68.195 port 44768 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:19.185000 audit[5400]: USER_ACCT pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.187000 audit[5400]: CRED_ACQ pid=5400 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.187000 audit[5400]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdab94e80 a2=3 a3=1 items=0 ppid=1 pid=5400 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:19.187000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:19.189143 sshd[5400]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:19.198859 systemd-logind[1794]: New session 21 of user core. Jun 25 14:18:19.203054 systemd[1]: Started session-21.scope - Session 21 of User core. Jun 25 14:18:19.213000 audit[5400]: USER_START pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.216000 audit[5402]: CRED_ACQ pid=5402 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.467284 sshd[5400]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:19.469000 audit[5400]: USER_END pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.469000 audit[5400]: CRED_DISP pid=5400 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.473775 systemd[1]: sshd@20-172.31.16.245:22-139.178.68.195:44768.service: Deactivated successfully. Jun 25 14:18:19.475306 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:18:19.473000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.245:22-139.178.68.195:44768 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:19.477851 systemd-logind[1794]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:18:19.480605 systemd-logind[1794]: Removed session 21. 
Jun 25 14:18:21.761000 audit[5412]: NETFILTER_CFG table=filter:117 family=2 entries=33 op=nft_register_rule pid=5412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:21.761000 audit[5412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffe1753d30 a2=0 a3=1 items=0 ppid=3185 pid=5412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:21.761000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:21.773420 kubelet[3044]: I0625 14:18:21.773347 3044 topology_manager.go:215] "Topology Admit Handler" podUID="91d7f306-482a-4abb-9612-ff226e83c664" podNamespace="calico-apiserver" podName="calico-apiserver-78bf68cf76-vjsgg" Jun 25 14:18:21.788075 systemd[1]: Created slice kubepods-besteffort-pod91d7f306_482a_4abb_9612_ff226e83c664.slice - libcontainer container kubepods-besteffort-pod91d7f306_482a_4abb_9612_ff226e83c664.slice. Jun 25 14:18:21.795034 kubelet[3044]: W0625 14:18:21.794974 3044 reflector.go:547] object-"calico-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:18:21.795192 kubelet[3044]: E0625 14:18:21.795042 3044 reflector.go:150] object-"calico-apiserver"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "configmaps" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:18:21.795192 kubelet[3044]: W0625 14:18:21.795183 3044 reflector.go:547] object-"calico-apiserver"/"calico-apiserver-certs": failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:18:21.795358 kubelet[3044]: E0625 14:18:21.795254 3044 reflector.go:150] object-"calico-apiserver"/"calico-apiserver-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "calico-apiserver-certs" is forbidden: User "system:node:ip-172-31-16-245" cannot list resource "secrets" in API group "" in the namespace "calico-apiserver": no relationship found between node 'ip-172-31-16-245' and this object Jun 25 14:18:21.764000 audit[5412]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5412 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:21.764000 audit[5412]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe1753d30 a2=0 a3=1 items=0 ppid=3185 pid=5412 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:21.764000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:21.898000 audit[5414]: NETFILTER_CFG table=filter:119 family=2 entries=34 
op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:21.898000 audit[5414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffe9dbb440 a2=0 a3=1 items=0 ppid=3185 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:21.898000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:21.902000 audit[5414]: NETFILTER_CFG table=nat:120 family=2 entries=20 op=nft_register_rule pid=5414 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:21.902000 audit[5414]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe9dbb440 a2=0 a3=1 items=0 ppid=3185 pid=5414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:21.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:21.914364 kubelet[3044]: I0625 14:18:21.914290 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/91d7f306-482a-4abb-9612-ff226e83c664-calico-apiserver-certs\") pod \"calico-apiserver-78bf68cf76-vjsgg\" (UID: \"91d7f306-482a-4abb-9612-ff226e83c664\") " pod="calico-apiserver/calico-apiserver-78bf68cf76-vjsgg" Jun 25 14:18:21.914680 kubelet[3044]: I0625 14:18:21.914632 3044 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tf7g\" (UniqueName: \"kubernetes.io/projected/91d7f306-482a-4abb-9612-ff226e83c664-kube-api-access-8tf7g\") pod \"calico-apiserver-78bf68cf76-vjsgg\" (UID: \"91d7f306-482a-4abb-9612-ff226e83c664\") " pod="calico-apiserver/calico-apiserver-78bf68cf76-vjsgg" Jun 25 14:18:23.016332 kubelet[3044]: E0625 14:18:23.016274 3044 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: failed to sync secret cache: timed out waiting for the condition Jun 25 14:18:23.016916 kubelet[3044]: E0625 14:18:23.016410 3044 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91d7f306-482a-4abb-9612-ff226e83c664-calico-apiserver-certs podName:91d7f306-482a-4abb-9612-ff226e83c664 nodeName:}" failed. No retries permitted until 2024-06-25 14:18:23.516378491 +0000 UTC m=+94.054554113 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/91d7f306-482a-4abb-9612-ff226e83c664-calico-apiserver-certs") pod "calico-apiserver-78bf68cf76-vjsgg" (UID: "91d7f306-482a-4abb-9612-ff226e83c664") : failed to sync secret cache: timed out waiting for the condition Jun 25 14:18:23.030683 kubelet[3044]: E0625 14:18:23.030607 3044 projected.go:294] Couldn't get configMap calico-apiserver/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jun 25 14:18:23.030683 kubelet[3044]: E0625 14:18:23.030684 3044 projected.go:200] Error preparing data for projected volume kube-api-access-8tf7g for pod calico-apiserver/calico-apiserver-78bf68cf76-vjsgg: failed to sync configmap cache: timed out waiting for the condition Jun 25 14:18:23.030901 kubelet[3044]: E0625 14:18:23.030775 3044 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/91d7f306-482a-4abb-9612-ff226e83c664-kube-api-access-8tf7g podName:91d7f306-482a-4abb-9612-ff226e83c664 nodeName:}" failed. No retries permitted until 2024-06-25 14:18:23.530748644 +0000 UTC m=+94.068924254 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-8tf7g" (UniqueName: "kubernetes.io/projected/91d7f306-482a-4abb-9612-ff226e83c664-kube-api-access-8tf7g") pod "calico-apiserver-78bf68cf76-vjsgg" (UID: "91d7f306-482a-4abb-9612-ff226e83c664") : failed to sync configmap cache: timed out waiting for the condition Jun 25 14:18:23.902969 containerd[1804]: time="2024-06-25T14:18:23.902893319Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78bf68cf76-vjsgg,Uid:91d7f306-482a-4abb-9612-ff226e83c664,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:18:24.124035 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:18:24.124164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calid60c6a3b33c: link becomes ready Jun 25 14:18:24.123760 systemd-networkd[1524]: calid60c6a3b33c: Link UP Jun 25 14:18:24.124138 systemd-networkd[1524]: calid60c6a3b33c: Gained carrier Jun 25 14:18:24.131419 (udev-worker)[5437]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:23.992 [INFO][5420] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0 calico-apiserver-78bf68cf76- calico-apiserver 91d7f306-482a-4abb-9612-ff226e83c664 1098 0 2024-06-25 14:18:21 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:78bf68cf76 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-245 calico-apiserver-78bf68cf76-vjsgg eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calid60c6a3b33c [] []}} ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:23.995 [INFO][5420] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.047 [INFO][5431] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" HandleID="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Workload="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.065 [INFO][5431] ipam_plugin.go 264: Auto assigning IP ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" HandleID="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Workload="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3e00), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-245", "pod":"calico-apiserver-78bf68cf76-vjsgg", "timestamp":"2024-06-25 14:18:24.04722879 +0000 UTC"}, Hostname:"ip-172-31-16-245", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.065 [INFO][5431] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.065 [INFO][5431] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.065 [INFO][5431] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-245' Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.070 [INFO][5431] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.082 [INFO][5431] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.090 [INFO][5431] ipam.go 489: Trying affinity for 192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.093 [INFO][5431] ipam.go 155: Attempting to load block cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.098 [INFO][5431] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.51.192/26 host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.098 [INFO][5431] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.51.192/26 handle="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.100 [INFO][5431] ipam.go 1685: Creating new handle: k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2 Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.106 [INFO][5431] ipam.go 1203: Writing block in order to claim IPs block=192.168.51.192/26 handle="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.115 [INFO][5431] ipam.go 1216: Successfully claimed IPs: [192.168.51.197/26] block=192.168.51.192/26 handle="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.115 [INFO][5431] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.51.197/26] handle="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" host="ip-172-31-16-245" Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.115 [INFO][5431] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:18:24.165353 containerd[1804]: 2024-06-25 14:18:24.115 [INFO][5431] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.51.197/26] IPv6=[] ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" HandleID="k8s-pod-network.dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Workload="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.118 [INFO][5420] k8s.go 386: Populated endpoint ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0", GenerateName:"calico-apiserver-78bf68cf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d7f306-482a-4abb-9612-ff226e83c664", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78bf68cf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"", Pod:"calico-apiserver-78bf68cf76-vjsgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid60c6a3b33c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.118 [INFO][5420] k8s.go 387: Calico CNI using IPs: [192.168.51.197/32] ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.119 [INFO][5420] dataplane_linux.go 68: Setting the host side veth name to calid60c6a3b33c ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.125 [INFO][5420] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.134 [INFO][5420] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0", GenerateName:"calico-apiserver-78bf68cf76-", Namespace:"calico-apiserver", SelfLink:"", UID:"91d7f306-482a-4abb-9612-ff226e83c664", ResourceVersion:"1098", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"78bf68cf76", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-245", ContainerID:"dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2", Pod:"calico-apiserver-78bf68cf76-vjsgg", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.51.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calid60c6a3b33c", MAC:"b2:2c:c0:7e:77:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:24.166832 containerd[1804]: 2024-06-25 14:18:24.157 [INFO][5420] k8s.go 500: Wrote updated endpoint to datastore ContainerID="dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2" Namespace="calico-apiserver" Pod="calico-apiserver-78bf68cf76-vjsgg" WorkloadEndpoint="ip--172--31--16--245-k8s-calico--apiserver--78bf68cf76--vjsgg-eth0" Jun 25 14:18:24.208988 containerd[1804]: time="2024-06-25T14:18:24.208840319Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:24.209355 containerd[1804]: time="2024-06-25T14:18:24.209299886Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:24.209576 containerd[1804]: time="2024-06-25T14:18:24.209508747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:24.209822 containerd[1804]: time="2024-06-25T14:18:24.209747189Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:24.249000 audit[5470]: NETFILTER_CFG table=filter:121 family=2 entries=55 op=nft_register_chain pid=5470 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:24.251991 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 14:18:24.252087 kernel: audit: type=1325 audit(1719325104.249:745): table=filter:121 family=2 entries=55 op=nft_register_chain pid=5470 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:24.249000 audit[5470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffc5ef9710 a2=0 a3=ffffbd6dffa8 items=0 ppid=4077 pid=5470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.260123 kernel: audit: type=1300 audit(1719325104.249:745): arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffc5ef9710 a2=0 a3=ffffbd6dffa8 items=0 ppid=4077 pid=5470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.249000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:24.263606 kernel: audit: type=1327 audit(1719325104.249:745): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:24.290632 systemd[1]: Started cri-containerd-dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2.scope - libcontainer container dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2. 
Jun 25 14:18:24.443000 audit: BPF prog-id=179 op=LOAD Jun 25 14:18:24.445822 kernel: audit: type=1334 audit(1719325104.443:746): prog-id=179 op=LOAD Jun 25 14:18:24.445000 audit: BPF prog-id=180 op=LOAD Jun 25 14:18:24.445000 audit[5469]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=5459 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.453880 kernel: audit: type=1334 audit(1719325104.445:747): prog-id=180 op=LOAD Jun 25 14:18:24.454021 kernel: audit: type=1300 audit(1719325104.445:747): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=5459 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464313837306237376638346636306165643566366134376130323335 Jun 25 14:18:24.459059 kernel: audit: type=1327 audit(1719325104.445:747): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464313837306237376638346636306165643566366134376130323335 Jun 25 14:18:24.445000 audit: BPF prog-id=181 op=LOAD Jun 25 14:18:24.460560 kernel: audit: type=1334 audit(1719325104.445:748): prog-id=181 op=LOAD Jun 25 14:18:24.445000 audit[5469]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=5459 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.466887 kernel: audit: type=1300 audit(1719325104.445:748): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=5459 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.445000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464313837306237376638346636306165643566366134376130323335 Jun 25 14:18:24.473135 kernel: audit: type=1327 audit(1719325104.445:748): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464313837306237376638346636306165643566366134376130323335 Jun 25 14:18:24.446000 audit: BPF prog-id=181 op=UNLOAD Jun 25 14:18:24.446000 audit: BPF prog-id=180 op=UNLOAD Jun 25 14:18:24.446000 audit: BPF prog-id=182 op=LOAD Jun 25 14:18:24.446000 audit[5469]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=5459 pid=5469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.446000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464313837306237376638346636306165643566366134376130323335 Jun 25 14:18:24.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.245:22-139.178.68.195:44772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:24.507404 systemd[1]: Started sshd@21-172.31.16.245:22-139.178.68.195:44772.service - OpenSSH per-connection server daemon (139.178.68.195:44772). Jun 25 14:18:24.548539 containerd[1804]: time="2024-06-25T14:18:24.548483480Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-78bf68cf76-vjsgg,Uid:91d7f306-482a-4abb-9612-ff226e83c664,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2\"" Jun 25 14:18:24.553960 containerd[1804]: time="2024-06-25T14:18:24.553892459Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:18:24.703000 audit[5488]: USER_ACCT pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.704082 sshd[5488]: Accepted publickey for core from 139.178.68.195 port 44772 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:24.705000 audit[5488]: CRED_ACQ pid=5488 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.705000 audit[5488]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb2c1540 a2=3 a3=1 items=0 ppid=1 pid=5488 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.705000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:24.707409 sshd[5488]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:24.714760 systemd-logind[1794]: New session 22 of user core. Jun 25 14:18:24.720045 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 14:18:24.731000 audit[5488]: USER_START pid=5488 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.734000 audit[5496]: CRED_ACQ pid=5496 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.973589 sshd[5488]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:24.976000 audit[5488]: USER_END pid=5488 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.977000 audit[5488]: CRED_DISP pid=5488 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.982741 systemd-logind[1794]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:18:24.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.245:22-139.178.68.195:44772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:24.983132 systemd[1]: sshd@21-172.31.16.245:22-139.178.68.195:44772.service: Deactivated successfully. Jun 25 14:18:24.984432 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:18:24.986145 systemd-logind[1794]: Removed session 22. 
Jun 25 14:18:25.575835 systemd-networkd[1524]: calid60c6a3b33c: Gained IPv6LL Jun 25 14:18:27.236000 audit[5530]: NETFILTER_CFG table=filter:122 family=2 entries=22 op=nft_register_rule pid=5530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:27.236000 audit[5530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffffeaed7d0 a2=0 a3=1 items=0 ppid=3185 pid=5530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:27.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:27.245000 audit[5530]: NETFILTER_CFG table=nat:123 family=2 entries=104 op=nft_register_chain pid=5530 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:27.245000 audit[5530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=fffffeaed7d0 a2=0 a3=1 items=0 ppid=3185 pid=5530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:27.245000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:27.310607 containerd[1804]: time="2024-06-25T14:18:27.310552641Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:27.314057 containerd[1804]: time="2024-06-25T14:18:27.314005245Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:18:27.315874 containerd[1804]: time="2024-06-25T14:18:27.315830734Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:27.321424 containerd[1804]: time="2024-06-25T14:18:27.320616895Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:27.325058 containerd[1804]: time="2024-06-25T14:18:27.325001713Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:27.329053 containerd[1804]: time="2024-06-25T14:18:27.328983977Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 2.774667999s" Jun 25 14:18:27.329427 containerd[1804]: time="2024-06-25T14:18:27.329353471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:18:27.335969 containerd[1804]: time="2024-06-25T14:18:27.335885585Z" level=info msg="CreateContainer within sandbox \"dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2\" for container 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:18:27.364388 containerd[1804]: time="2024-06-25T14:18:27.364324677Z" level=info msg="CreateContainer within sandbox \"dd1870b77f84f60aed5f6a47a0235babe19c8191148cb9958114cda7cb166cc2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160\"" Jun 25 14:18:27.367520 containerd[1804]: time="2024-06-25T14:18:27.365969457Z" level=info msg="StartContainer for \"f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160\"" Jun 25 14:18:27.436715 systemd[1]: run-containerd-runc-k8s.io-f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160-runc.DEyPcr.mount: Deactivated successfully. Jun 25 14:18:27.448978 systemd[1]: Started cri-containerd-f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160.scope - libcontainer container f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160. Jun 25 14:18:27.469000 audit: BPF prog-id=183 op=LOAD Jun 25 14:18:27.471000 audit: BPF prog-id=184 op=LOAD Jun 25 14:18:27.471000 audit[5546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=40001398b0 a2=78 a3=0 items=0 ppid=5459 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:27.471000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632376361393265626266313833383830393335313032633765366165 Jun 25 14:18:27.472000 audit: BPF prog-id=185 op=LOAD Jun 25 14:18:27.472000 audit[5546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=19 a0=5 a1=4000139640 a2=78 a3=0 items=0 ppid=5459 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:27.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632376361393265626266313833383830393335313032633765366165 Jun 25 14:18:27.473000 audit: BPF prog-id=185 op=UNLOAD Jun 25 14:18:27.473000 audit: BPF prog-id=184 op=UNLOAD Jun 25 14:18:27.473000 audit: BPF prog-id=186 op=LOAD Jun 25 14:18:27.473000 audit[5546]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000139b10 a2=78 a3=0 items=0 ppid=5459 pid=5546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:27.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6632376361393265626266313833383830393335313032633765366165 Jun 25 14:18:27.526200 containerd[1804]: time="2024-06-25T14:18:27.526013640Z" level=info msg="StartContainer for \"f27ca92ebbf183880935102c7e6ae69a8a2cc834231802b7573be2ea99118160\" returns successfully" Jun 25 14:18:28.278711 kubelet[3044]: I0625 14:18:28.278591 3044 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-78bf68cf76-vjsgg" podStartSLOduration=4.498679163 podStartE2EDuration="7.278568028s" podCreationTimestamp="2024-06-25 14:18:21 +0000 UTC" firstStartedPulling="2024-06-25 14:18:24.55107401 +0000 UTC m=+95.089249620" lastFinishedPulling="2024-06-25 14:18:27.330962851 +0000 UTC m=+97.869138485" observedRunningTime="2024-06-25 14:18:28.276128723 +0000 UTC m=+98.814304369" watchObservedRunningTime="2024-06-25 14:18:28.278568028 +0000 UTC m=+98.816743626" Jun 25 14:18:28.322000 audit[5578]: NETFILTER_CFG table=filter:124 family=2 entries=10 op=nft_register_rule pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:28.322000 audit[5578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffeb204a50 a2=0 a3=1 items=0 ppid=3185 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:28.322000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:28.326000 audit[5578]: NETFILTER_CFG table=nat:125 family=2 entries=44 op=nft_register_rule pid=5578 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:28.326000 audit[5578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=ffffeb204a50 a2=0 a3=1 items=0 ppid=3185 pid=5578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:28.326000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.132000 audit[5580]: NETFILTER_CFG table=filter:126 family=2 entries=10 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.132000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc5ed5600 a2=0 a3=1 items=0 ppid=3185 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.132000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.135000 audit[5580]: NETFILTER_CFG table=nat:127 family=2 entries=44 op=nft_register_rule pid=5580 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.135000 audit[5580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14988 a0=3 a1=ffffc5ed5600 a2=0 a3=1 items=0 ppid=3185 pid=5580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.135000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.331000 audit[5589]: NETFILTER_CFG table=filter:128 family=2 entries=9 op=nft_register_rule pid=5589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.333600 kernel: kauditd_printk_skb: 46 callbacks suppressed Jun 25 14:18:29.333775 kernel: audit: type=1325 audit(1719325109.331:773): table=filter:128 
family=2 entries=9 op=nft_register_rule pid=5589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.331000 audit[5589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffff2301e0 a2=0 a3=1 items=0 ppid=3185 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.341820 kernel: audit: type=1300 audit(1719325109.331:773): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffff2301e0 a2=0 a3=1 items=0 ppid=3185 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.331000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.344596 kernel: audit: type=1327 audit(1719325109.331:773): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.347000 audit[5589]: NETFILTER_CFG table=nat:129 family=2 entries=51 op=nft_register_chain pid=5589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.347000 audit[5589]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18564 a0=3 a1=ffffff2301e0 a2=0 a3=1 items=0 ppid=3185 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.357201 kernel: audit: type=1325 audit(1719325109.347:774): table=nat:129 family=2 entries=51 op=nft_register_chain pid=5589 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:29.357267 kernel: audit: type=1300 audit(1719325109.347:774): arch=c00000b7 syscall=211 success=yes exit=18564 a0=3 a1=ffffff2301e0 a2=0 a3=1 items=0 ppid=3185 pid=5589 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:29.347000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:29.359944 kernel: audit: type=1327 audit(1719325109.347:774): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:30.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.245:22-139.178.68.195:34668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:30.017956 systemd[1]: Started sshd@22-172.31.16.245:22-139.178.68.195:34668.service - OpenSSH per-connection server daemon (139.178.68.195:34668). Jun 25 14:18:30.023714 kernel: audit: type=1130 audit(1719325110.018:775): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.245:22-139.178.68.195:34668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:30.193000 audit[5591]: USER_ACCT pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.198036 sshd[5591]: Accepted publickey for core from 139.178.68.195 port 34668 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:30.199727 kernel: audit: type=1101 audit(1719325110.193:776): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.199000 audit[5591]: CRED_ACQ pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.202762 sshd[5591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:30.208375 kernel: audit: type=1103 audit(1719325110.199:777): pid=5591 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.208509 kernel: audit: type=1006 audit(1719325110.199:778): pid=5591 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 14:18:30.199000 audit[5591]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdf6d5ed0 a2=3 a3=1 items=0 ppid=1 pid=5591 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:30.199000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:30.216491 systemd-logind[1794]: New session 23 of user core. Jun 25 14:18:30.221020 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jun 25 14:18:30.230000 audit[5591]: USER_START pid=5591 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.233000 audit[5593]: CRED_ACQ pid=5593 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.487774 sshd[5591]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:30.489000 audit[5591]: USER_END pid=5591 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.490000 audit[5591]: CRED_DISP pid=5591 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.493839 systemd-logind[1794]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:18:30.494501 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:18:30.495699 systemd[1]: sshd@22-172.31.16.245:22-139.178.68.195:34668.service: Deactivated successfully. Jun 25 14:18:30.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.245:22-139.178.68.195:34668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:30.498071 systemd-logind[1794]: Removed session 23. Jun 25 14:18:35.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.245:22-139.178.68.195:34674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:35.530418 systemd[1]: Started sshd@23-172.31.16.245:22-139.178.68.195:34674.service - OpenSSH per-connection server daemon (139.178.68.195:34674). Jun 25 14:18:35.534712 kernel: kauditd_printk_skb: 7 callbacks suppressed Jun 25 14:18:35.534877 kernel: audit: type=1130 audit(1719325115.530:784): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.245:22-139.178.68.195:34674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:35.712203 sshd[5607]: Accepted publickey for core from 139.178.68.195 port 34674 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:35.711000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.716000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.718402 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:35.721697 kernel: audit: type=1101 audit(1719325115.711:785): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.721880 kernel: audit: type=1103 audit(1719325115.716:786): pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.722436 kernel: audit: type=1006 audit(1719325115.716:787): pid=5607 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 14:18:35.716000 audit[5607]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce115630 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.730054 kernel: audit: type=1300 audit(1719325115.716:787): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce115630 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.716000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:35.732472 kernel: audit: type=1327 audit(1719325115.716:787): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:35.734998 systemd-logind[1794]: New session 24 of user core. Jun 25 14:18:35.740975 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 14:18:35.750000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.756795 kernel: audit: type=1105 audit(1719325115.750:788): pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.756000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:35.761703 kernel: audit: type=1103 audit(1719325115.756:789): pid=5609 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:36.000625 sshd[5607]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:36.002000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:36.006033 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 14:18:36.007698 systemd[1]: sshd@23-172.31.16.245:22-139.178.68.195:34674.service: Deactivated successfully. Jun 25 14:18:36.002000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:36.013633 kernel: audit: type=1106 audit(1719325116.002:790): pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:36.013810 kernel: audit: type=1104 audit(1719325116.002:791): pid=5607 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:36.013945 systemd-logind[1794]: Session 24 logged out. Waiting for processes to exit. Jun 25 14:18:36.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.245:22-139.178.68.195:34674 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:36.016455 systemd-logind[1794]: Removed session 24. Jun 25 14:18:41.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.245:22-139.178.68.195:52146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:41.038388 systemd[1]: Started sshd@24-172.31.16.245:22-139.178.68.195:52146.service - OpenSSH per-connection server daemon (139.178.68.195:52146). Jun 25 14:18:41.041688 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:41.041823 kernel: audit: type=1130 audit(1719325121.038:793): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.245:22-139.178.68.195:52146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:41.211000 audit[5625]: USER_ACCT pid=5625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.212874 sshd[5625]: Accepted publickey for core from 139.178.68.195 port 52146 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:41.217787 kernel: audit: type=1101 audit(1719325121.211:794): pid=5625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.218000 audit[5625]: CRED_ACQ pid=5625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.220388 sshd[5625]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:41.226739 kernel: audit: type=1103 audit(1719325121.218:795): pid=5625 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.226889 kernel: audit: type=1006 audit(1719325121.218:796): pid=5625 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 14:18:41.218000 audit[5625]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff052d0a0 a2=3 a3=1 items=0 ppid=1 pid=5625 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:41.231788 kernel: audit: type=1300 audit(1719325121.218:796): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff052d0a0 a2=3 a3=1 items=0 ppid=1 pid=5625 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:41.218000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:41.235717 kernel: audit: type=1327 audit(1719325121.218:796): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:41.241082 systemd-logind[1794]: New session 25 of user core. Jun 25 14:18:41.247993 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 14:18:41.258000 audit[5625]: USER_START pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.261000 audit[5627]: CRED_ACQ pid=5627 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.268205 kernel: audit: type=1105 audit(1719325121.258:797): pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.268299 kernel: audit: type=1103 audit(1719325121.261:798): pid=5627 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.489831 sshd[5625]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:41.491000 audit[5625]: USER_END pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.495163 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 14:18:41.491000 audit[5625]: CRED_DISP pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.496785 systemd[1]: sshd@24-172.31.16.245:22-139.178.68.195:52146.service: Deactivated successfully. Jun 25 14:18:41.502278 kernel: audit: type=1106 audit(1719325121.491:799): pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.502427 kernel: audit: type=1104 audit(1719325121.491:800): pid=5625 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:41.502408 systemd-logind[1794]: Session 25 logged out. Waiting for processes to exit. Jun 25 14:18:41.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.245:22-139.178.68.195:52146 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:41.505496 systemd-logind[1794]: Removed session 25. Jun 25 14:18:42.661762 systemd[1]: run-containerd-runc-k8s.io-6df28bc0795455b3ee72364f0effa6f09696aa8853d64e61d13f40e256785701-runc.GBgWDV.mount: Deactivated successfully. 
Jun 25 14:18:45.875000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-client.crt" dev="overlay" ino=7802 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.875000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=400a0a79e0 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:45.875000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:45.876000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/apiserver.crt" dev="overlay" ino=185 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.876000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=400a0a7b00 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:45.876000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:45.875000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.875000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=76 a1=40087fcf90 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:45.875000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:45.898000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.898000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=400a22e720 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:45.898000 audit: PROCTITLE 
proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:45.922000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.922000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001445c80 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:45.922000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:45.926000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:45.926000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4000e2e8e0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:18:45.926000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:18:46.028000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:46.028000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=40044f8980 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:46.028000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:46.029000 audit[2808]: AVC avc: denied { watch } for pid=2808 comm="kube-apiserver" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c120,c956 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:18:46.029000 audit[2808]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=75 a1=4009da2030 a2=fc6 a3=0 items=0 ppid=2657 pid=2808 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-apiserver" exe="/usr/local/bin/kube-apiserver" subj=system_u:system_r:container_t:s0:c120,c956 key=(null) Jun 25 14:18:46.029000 audit: PROCTITLE proctitle=6B7562652D617069736572766572002D2D6164766572746973652D616464726573733D3137322E33312E31362E323435002D2D616C6C6F772D70726976696C656765643D74727565002D2D617574686F72697A6174696F6E2D6D6F64653D4E6F64652C52424143002D2D636C69656E742D63612D66696C653D2F6574632F6B75 Jun 25 14:18:46.536890 kernel: kauditd_printk_skb: 25 callbacks suppressed Jun 25 14:18:46.537074 kernel: audit: type=1130 audit(1719325126.530:810): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.245:22-139.178.68.195:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:46.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.245:22-139.178.68.195:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:46.530450 systemd[1]: Started sshd@25-172.31.16.245:22-139.178.68.195:52148.service - OpenSSH per-connection server daemon (139.178.68.195:52148). Jun 25 14:18:46.708000 audit[5660]: USER_ACCT pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.709550 sshd[5660]: Accepted publickey for core from 139.178.68.195 port 52148 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:46.715749 kernel: audit: type=1101 audit(1719325126.708:811): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.715000 audit[5660]: CRED_ACQ pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.717507 sshd[5660]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:46.721698 kernel: audit: type=1103 audit(1719325126.715:812): pid=5660 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.715000 audit[5660]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f38660 a2=3 a3=1 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:46.728766 systemd-logind[1794]: New session 26 of user core. 
Jun 25 14:18:46.737168 kernel: audit: type=1006 audit(1719325126.715:813): pid=5660 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 14:18:46.737232 kernel: audit: type=1300 audit(1719325126.715:813): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3f38660 a2=3 a3=1 items=0 ppid=1 pid=5660 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:46.737286 kernel: audit: type=1327 audit(1719325126.715:813): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:46.715000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:46.737065 systemd[1]: Started session-26.scope - Session 26 of User core. Jun 25 14:18:46.747000 audit[5660]: USER_START pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.751000 audit[5662]: CRED_ACQ pid=5662 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.757550 kernel: audit: type=1105 audit(1719325126.747:814): pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.757628 kernel: audit: type=1103 audit(1719325126.751:815): pid=5662 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.985617 sshd[5660]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:46.986000 audit[5660]: USER_END pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.990455 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 14:18:46.991730 systemd[1]: sshd@25-172.31.16.245:22-139.178.68.195:52148.service: Deactivated successfully. Jun 25 14:18:46.994579 systemd-logind[1794]: Session 26 logged out. Waiting for processes to exit. 
Jun 25 14:18:46.987000 audit[5660]: CRED_DISP pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.998716 kernel: audit: type=1106 audit(1719325126.986:816): pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:46.998876 kernel: audit: type=1104 audit(1719325126.987:817): pid=5660 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:47.000928 systemd-logind[1794]: Removed session 26. Jun 25 14:18:46.987000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.245:22-139.178.68.195:52148 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:49.054785 systemd[1]: run-containerd-runc-k8s.io-4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6-runc.DK0GQP.mount: Deactivated successfully. Jun 25 14:18:52.025706 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:52.025874 kernel: audit: type=1130 audit(1719325132.023:819): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.245:22-139.178.68.195:51462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:52.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.245:22-139.178.68.195:51462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:52.023925 systemd[1]: Started sshd@26-172.31.16.245:22-139.178.68.195:51462.service - OpenSSH per-connection server daemon (139.178.68.195:51462). 
Jun 25 14:18:52.198323 sshd[5698]: Accepted publickey for core from 139.178.68.195 port 51462 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:52.197000 audit[5698]: USER_ACCT pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.202000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.208559 kernel: audit: type=1101 audit(1719325132.197:820): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.208967 kernel: audit: type=1103 audit(1719325132.202:821): pid=5698 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.204462 sshd[5698]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:52.212397 kernel: audit: type=1006 audit(1719325132.203:822): pid=5698 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 14:18:52.212518 kernel: audit: type=1300 audit(1719325132.203:822): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0c9b640 a2=3 a3=1 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.203000 audit[5698]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe0c9b640 a2=3 a3=1 items=0 ppid=1 pid=5698 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.217395 kernel: audit: type=1327 audit(1719325132.203:822): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:52.203000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:52.216415 systemd-logind[1794]: New session 27 of user core. Jun 25 14:18:52.223990 systemd[1]: Started session-27.scope - Session 27 of User core. 
Jun 25 14:18:52.235000 audit[5698]: USER_START pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.241842 kernel: audit: type=1105 audit(1719325132.235:823): pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.242000 audit[5700]: CRED_ACQ pid=5700 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.248816 kernel: audit: type=1103 audit(1719325132.242:824): pid=5700 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.521981 sshd[5698]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:52.523000 audit[5698]: USER_END pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.527059 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 14:18:52.528353 systemd[1]: sshd@26-172.31.16.245:22-139.178.68.195:51462.service: Deactivated successfully. Jun 25 14:18:52.523000 audit[5698]: CRED_DISP pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.535206 kernel: audit: type=1106 audit(1719325132.523:825): pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.535344 kernel: audit: type=1104 audit(1719325132.523:826): pid=5698 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:52.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.245:22-139.178.68.195:51462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:52.535954 systemd-logind[1794]: Session 27 logged out. Waiting for processes to exit. Jun 25 14:18:52.537901 systemd-logind[1794]: Removed session 27. Jun 25 14:18:55.746278 systemd[1]: run-containerd-runc-k8s.io-4eba7e98af4fc55adef81bc45ca3b681d89902e80af2ccb59b0924dbcc60a8e6-runc.mxRBfL.mount: Deactivated successfully. 
Jun 25 14:18:57.563772 systemd[1]: Started sshd@27-172.31.16.245:22-139.178.68.195:51466.service - OpenSSH per-connection server daemon (139.178.68.195:51466). Jun 25 14:18:57.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.245:22-139.178.68.195:51466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:57.569637 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:57.569999 kernel: audit: type=1130 audit(1719325137.563:828): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.245:22-139.178.68.195:51466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:57.731000 audit[5734]: USER_ACCT pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.732863 sshd[5734]: Accepted publickey for core from 139.178.68.195 port 51466 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:57.737711 kernel: audit: type=1101 audit(1719325137.731:829): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.738000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.740181 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:57.746207 kernel: audit: type=1103 audit(1719325137.738:830): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.746356 kernel: audit: type=1006 audit(1719325137.738:831): pid=5734 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 14:18:57.746414 kernel: audit: type=1300 audit(1719325137.738:831): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6a24d00 a2=3 a3=1 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:57.738000 audit[5734]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd6a24d00 a2=3 a3=1 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:57.738000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:57.752597 kernel: audit: type=1327 audit(1719325137.738:831): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:57.758227 systemd-logind[1794]: New session 28 of user core. Jun 25 14:18:57.761938 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 14:18:57.771000 audit[5734]: USER_START pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.780946 kernel: audit: type=1105 audit(1719325137.771:832): pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.781089 kernel: audit: type=1103 audit(1719325137.777:833): pid=5737 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:57.777000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.014633 sshd[5734]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:58.015000 audit[5734]: USER_END pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.020863 systemd-logind[1794]: Session 28 logged out. Waiting for processes to exit. Jun 25 14:18:58.016000 audit[5734]: CRED_DISP pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.026774 kernel: audit: type=1106 audit(1719325138.015:834): pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.026898 kernel: audit: type=1104 audit(1719325138.016:835): pid=5734 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.023048 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 14:18:58.024936 systemd-logind[1794]: Removed session 28. Jun 25 14:18:58.026018 systemd[1]: sshd@27-172.31.16.245:22-139.178.68.195:51466.service: Deactivated successfully. Jun 25 14:18:58.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.245:22-139.178.68.195:51466 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:03.041000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.047789 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:03.047907 kernel: audit: type=1400 audit(1719325143.041:837): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.041000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001478620 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.053696 kernel: audit: type=1300 audit(1719325143.041:837): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001478620 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.041000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.058709 kernel: audit: type=1327 audit(1719325143.041:837): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.042000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.063043 kernel: audit: type=1400 audit(1719325143.042:838): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.063122 kernel: audit: type=1300 audit(1719325143.042:838): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40016d2720 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.042000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40016d2720 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.042000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.073272 kernel: audit: type=1327 audit(1719325143.042:838): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.073351 kernel: audit: type=1400 audit(1719325143.042:839): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.042000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.042000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001478640 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.083004 kernel: audit: type=1300 audit(1719325143.042:839): arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=4001478640 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:03.042000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.087805 kernel: audit: type=1327 audit(1719325143.042:839): proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:03.047000 audit[2825]: AVC avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.092043 kernel: audit: type=1400 audit(1719325143.047:840): avc: denied { watch } for pid=2825 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:03.047000 audit[2825]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=a a1=40014787e0 a2=fc6 a3=0 items=0 ppid=2659 pid=2825 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 
14:19:03.047000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:11.667000 audit: BPF prog-id=116 op=UNLOAD Jun 25 14:19:11.667640 systemd[1]: cri-containerd-596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369.scope: Deactivated successfully. Jun 25 14:19:11.669346 kernel: kauditd_printk_skb: 2 callbacks suppressed Jun 25 14:19:11.669403 kernel: audit: type=1334 audit(1719325151.667:841): prog-id=116 op=UNLOAD Jun 25 14:19:11.668244 systemd[1]: cri-containerd-596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369.scope: Consumed 9.927s CPU time. Jun 25 14:19:11.671000 audit: BPF prog-id=119 op=UNLOAD Jun 25 14:19:11.674691 kernel: audit: type=1334 audit(1719325151.671:842): prog-id=119 op=UNLOAD Jun 25 14:19:11.716164 containerd[1804]: time="2024-06-25T14:19:11.706142231Z" level=info msg="shim disconnected" id=596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369 namespace=k8s.io Jun 25 14:19:11.716164 containerd[1804]: time="2024-06-25T14:19:11.706235819Z" level=warning msg="cleaning up after shim disconnected" id=596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369 namespace=k8s.io Jun 25 14:19:11.716164 containerd[1804]: time="2024-06-25T14:19:11.706258763Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:19:11.715023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369-rootfs.mount: Deactivated successfully. Jun 25 14:19:12.082473 systemd[1]: cri-containerd-32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef.scope: Deactivated successfully. Jun 25 14:19:12.083007 systemd[1]: cri-containerd-32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef.scope: Consumed 5.088s CPU time. Jun 25 14:19:12.092885 kernel: audit: type=1334 audit(1719325152.089:843): prog-id=84 op=UNLOAD Jun 25 14:19:12.093069 kernel: audit: type=1334 audit(1719325152.089:844): prog-id=99 op=UNLOAD Jun 25 14:19:12.089000 audit: BPF prog-id=84 op=UNLOAD Jun 25 14:19:12.089000 audit: BPF prog-id=99 op=UNLOAD Jun 25 14:19:12.133802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef-rootfs.mount: Deactivated successfully. 
Jun 25 14:19:12.136614 containerd[1804]: time="2024-06-25T14:19:12.136529766Z" level=info msg="shim disconnected" id=32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef namespace=k8s.io Jun 25 14:19:12.136614 containerd[1804]: time="2024-06-25T14:19:12.136605750Z" level=warning msg="cleaning up after shim disconnected" id=32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef namespace=k8s.io Jun 25 14:19:12.136975 containerd[1804]: time="2024-06-25T14:19:12.136629606Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:19:12.188555 kubelet[3044]: E0625 14:19:12.188163 3044 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 14:19:12.389448 kubelet[3044]: I0625 14:19:12.389292 3044 scope.go:117] "RemoveContainer" containerID="32bff08419aba10b4de7c747e53516f1decc1023bf8445f719384cac28bf32ef" Jun 25 14:19:12.394918 kubelet[3044]: I0625 14:19:12.394325 3044 scope.go:117] "RemoveContainer" containerID="596b797d3be91e9d33beaf0db509e04844ef25980c78fabd7131dfa0f990e369" Jun 25 14:19:12.395455 containerd[1804]: time="2024-06-25T14:19:12.395390078Z" level=info msg="CreateContainer within sandbox \"cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jun 25 14:19:12.399088 containerd[1804]: time="2024-06-25T14:19:12.399033102Z" level=info msg="CreateContainer within sandbox \"18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jun 25 14:19:12.431627 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4065907233.mount: Deactivated successfully. Jun 25 14:19:12.436072 containerd[1804]: time="2024-06-25T14:19:12.435995465Z" level=info msg="CreateContainer within sandbox \"cb7debc610bee626c7867976c8bce0e59e563a183e7c4928010ce54fec99cd7d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"2c3eef99b74685a3df31217b717f17e835f48171a732f9eba84cadf7461afc45\"" Jun 25 14:19:12.437026 containerd[1804]: time="2024-06-25T14:19:12.436978330Z" level=info msg="StartContainer for \"2c3eef99b74685a3df31217b717f17e835f48171a732f9eba84cadf7461afc45\"" Jun 25 14:19:12.446063 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2572606514.mount: Deactivated successfully. Jun 25 14:19:12.446978 containerd[1804]: time="2024-06-25T14:19:12.446906959Z" level=info msg="CreateContainer within sandbox \"18d471f34dfd43d1f794a95c317d23eb51474ca481f667968daff4af864fb295\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"c4ae2b150af076a45023af027a4561958f1bd44f6ae0032af54f4f35cc937dea\"" Jun 25 14:19:12.448407 containerd[1804]: time="2024-06-25T14:19:12.448344193Z" level=info msg="StartContainer for \"c4ae2b150af076a45023af027a4561958f1bd44f6ae0032af54f4f35cc937dea\"" Jun 25 14:19:12.491011 systemd[1]: Started cri-containerd-2c3eef99b74685a3df31217b717f17e835f48171a732f9eba84cadf7461afc45.scope - libcontainer container 2c3eef99b74685a3df31217b717f17e835f48171a732f9eba84cadf7461afc45. Jun 25 14:19:12.506064 systemd[1]: Started cri-containerd-c4ae2b150af076a45023af027a4561958f1bd44f6ae0032af54f4f35cc937dea.scope - libcontainer container c4ae2b150af076a45023af027a4561958f1bd44f6ae0032af54f4f35cc937dea. 
Jun 25 14:19:12.522000 audit: BPF prog-id=187 op=LOAD Jun 25 14:19:12.525708 kernel: audit: type=1334 audit(1719325152.522:845): prog-id=187 op=LOAD Jun 25 14:19:12.524000 audit: BPF prog-id=188 op=LOAD Jun 25 14:19:12.524000 audit[5841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=2659 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.532609 kernel: audit: type=1334 audit(1719325152.524:846): prog-id=188 op=LOAD Jun 25 14:19:12.532810 kernel: audit: type=1300 audit(1719325152.524:846): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a98b0 a2=78 a3=0 items=0 ppid=2659 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336565663939623734363835613364663331323137623731376631 Jun 25 14:19:12.537797 kernel: audit: type=1327 audit(1719325152.524:846): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336565663939623734363835613364663331323137623731376631 Jun 25 14:19:12.524000 audit: BPF prog-id=189 op=LOAD Jun 25 14:19:12.540183 kernel: audit: type=1334 audit(1719325152.524:847): prog-id=189 op=LOAD Jun 25 14:19:12.545464 kernel: audit: type=1300 audit(1719325152.524:847): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=2659 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.524000 audit[5841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001a9640 a2=78 a3=0 items=0 ppid=2659 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.524000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336565663939623734363835613364663331323137623731376631 Jun 25 14:19:12.525000 audit: BPF prog-id=189 op=UNLOAD Jun 25 14:19:12.525000 audit: BPF prog-id=188 op=UNLOAD Jun 25 14:19:12.525000 audit: BPF prog-id=190 op=LOAD Jun 25 14:19:12.525000 audit[5841]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001a9b10 a2=78 a3=0 items=0 ppid=2659 pid=5841 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.525000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3263336565663939623734363835613364663331323137623731376631 
Jun 25 14:19:12.544000 audit: BPF prog-id=191 op=LOAD Jun 25 14:19:12.545000 audit: BPF prog-id=192 op=LOAD Jun 25 14:19:12.545000 audit[5851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010d8b0 a2=78 a3=0 items=0 ppid=3334 pid=5851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.545000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334616532623135306166303736613435303233616630323761343536 Jun 25 14:19:12.546000 audit: BPF prog-id=193 op=LOAD Jun 25 14:19:12.546000 audit[5851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400010d640 a2=78 a3=0 items=0 ppid=3334 pid=5851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334616532623135306166303736613435303233616630323761343536 Jun 25 14:19:12.546000 audit: BPF prog-id=193 op=UNLOAD Jun 25 14:19:12.546000 audit: BPF prog-id=192 op=UNLOAD Jun 25 14:19:12.546000 audit: BPF prog-id=194 op=LOAD Jun 25 14:19:12.546000 audit[5851]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400010db10 a2=78 a3=0 items=0 ppid=3334 pid=5851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:12.546000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334616532623135306166303736613435303233616630323761343536 Jun 25 14:19:12.575455 containerd[1804]: time="2024-06-25T14:19:12.575390904Z" level=info msg="StartContainer for \"c4ae2b150af076a45023af027a4561958f1bd44f6ae0032af54f4f35cc937dea\" returns successfully" Jun 25 14:19:12.618025 containerd[1804]: time="2024-06-25T14:19:12.617963041Z" level=info msg="StartContainer for \"2c3eef99b74685a3df31217b717f17e835f48171a732f9eba84cadf7461afc45\" returns successfully" Jun 25 14:19:16.261000 audit[5865]: AVC avc: denied { watch } for pid=5865 comm="kube-controller" path="/etc/kubernetes/pki/front-proxy-ca.crt" dev="overlay" ino=7800 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:16.261000 audit[5865]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=7 a1=4000055200 a2=fc6 a3=0 items=0 ppid=2659 pid=5865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:16.261000 audit: PROCTITLE 
proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:16.263000 audit[5865]: AVC avc: denied { watch } for pid=5865 comm="kube-controller" path="/etc/kubernetes/pki/ca.crt" dev="overlay" ino=7796 scontext=system_u:system_r:container_t:s0:c96,c543 tcontext=system_u:object_r:etc_t:s0 tclass=file permissive=0 Jun 25 14:19:16.263000 audit[5865]: SYSCALL arch=c00000b7 syscall=27 success=no exit=-13 a0=8 a1=40003d77a0 a2=fc6 a3=0 items=0 ppid=2659 pid=5865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kube-controller" exe="/usr/local/bin/kube-controller-manager" subj=system_u:system_r:container_t:s0:c96,c543 key=(null) Jun 25 14:19:16.263000 audit: PROCTITLE proctitle=6B7562652D636F6E74726F6C6C65722D6D616E61676572002D2D616C6C6F636174652D6E6F64652D63696472733D74727565002D2D61757468656E7469636174696F6E2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F636F6E74726F6C6C65722D6D616E616765722E636F6E66002D2D617574686F7269 Jun 25 14:19:17.691809 systemd[1]: cri-containerd-2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0.scope: Deactivated successfully. Jun 25 14:19:17.692305 systemd[1]: cri-containerd-2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0.scope: Consumed 3.758s CPU time. Jun 25 14:19:17.695000 audit: BPF prog-id=76 op=UNLOAD Jun 25 14:19:17.697297 kernel: kauditd_printk_skb: 24 callbacks suppressed Jun 25 14:19:17.697425 kernel: audit: type=1334 audit(1719325157.695:859): prog-id=76 op=UNLOAD Jun 25 14:19:17.695000 audit: BPF prog-id=96 op=UNLOAD Jun 25 14:19:17.699786 kernel: audit: type=1334 audit(1719325157.695:860): prog-id=96 op=UNLOAD Jun 25 14:19:17.737525 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0-rootfs.mount: Deactivated successfully. Jun 25 14:19:17.740260 containerd[1804]: time="2024-06-25T14:19:17.737985238Z" level=info msg="shim disconnected" id=2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0 namespace=k8s.io Jun 25 14:19:17.740260 containerd[1804]: time="2024-06-25T14:19:17.738055918Z" level=warning msg="cleaning up after shim disconnected" id=2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0 namespace=k8s.io Jun 25 14:19:17.740260 containerd[1804]: time="2024-06-25T14:19:17.738076822Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:19:18.419547 kubelet[3044]: I0625 14:19:18.419425 3044 scope.go:117] "RemoveContainer" containerID="2e80d2d01d7a64b456e0c1fa9def66956bb999711866bcc47564d8ce452c04a0" Jun 25 14:19:18.423929 containerd[1804]: time="2024-06-25T14:19:18.423865898Z" level=info msg="CreateContainer within sandbox \"e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jun 25 14:19:18.459527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount525602798.mount: Deactivated successfully. 
Jun 25 14:19:18.468047 containerd[1804]: time="2024-06-25T14:19:18.467966584Z" level=info msg="CreateContainer within sandbox \"e9130a9e279d286d2d3ba03a442da26f11c64aa21e109fb3104f684aaafeed73\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9b2f73a512ea872381be7b8c815520a48bca7ebc984477e0826c9b65c7ef9098\"" Jun 25 14:19:18.469193 containerd[1804]: time="2024-06-25T14:19:18.469145805Z" level=info msg="StartContainer for \"9b2f73a512ea872381be7b8c815520a48bca7ebc984477e0826c9b65c7ef9098\"" Jun 25 14:19:18.473697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987634180.mount: Deactivated successfully. Jun 25 14:19:18.517991 systemd[1]: Started cri-containerd-9b2f73a512ea872381be7b8c815520a48bca7ebc984477e0826c9b65c7ef9098.scope - libcontainer container 9b2f73a512ea872381be7b8c815520a48bca7ebc984477e0826c9b65c7ef9098. Jun 25 14:19:18.537000 audit: BPF prog-id=195 op=LOAD Jun 25 14:19:18.539694 kernel: audit: type=1334 audit(1719325158.537:861): prog-id=195 op=LOAD Jun 25 14:19:18.538000 audit: BPF prog-id=196 op=LOAD Jun 25 14:19:18.538000 audit[5959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2658 pid=5959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:18.545980 kernel: audit: type=1334 audit(1719325158.538:862): prog-id=196 op=LOAD Jun 25 14:19:18.546069 kernel: audit: type=1300 audit(1719325158.538:862): arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018d8b0 a2=78 a3=0 items=0 ppid=2658 pid=5959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:18.546125 kernel: audit: type=1327 audit(1719325158.538:862): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962326637336135313265613837323338316265376238633831353532 Jun 25 14:19:18.538000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962326637336135313265613837323338316265376238633831353532 Jun 25 14:19:18.539000 audit: BPF prog-id=197 op=LOAD Jun 25 14:19:18.552756 kernel: audit: type=1334 audit(1719325158.539:863): prog-id=197 op=LOAD Jun 25 14:19:18.552852 kernel: audit: type=1300 audit(1719325158.539:863): arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2658 pid=5959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:18.539000 audit[5959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400018d640 a2=78 a3=0 items=0 ppid=2658 pid=5959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:18.539000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962326637336135313265613837323338316265376238633831353532 Jun 25 14:19:18.562530 kernel: audit: type=1327 audit(1719325158.539:863): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962326637336135313265613837323338316265376238633831353532 Jun 25 14:19:18.540000 audit: BPF prog-id=197 op=UNLOAD Jun 25 14:19:18.564383 kernel: audit: type=1334 audit(1719325158.540:864): prog-id=197 op=UNLOAD Jun 25 14:19:18.540000 audit: BPF prog-id=196 op=UNLOAD Jun 25 14:19:18.540000 audit: BPF prog-id=198 op=LOAD Jun 25 14:19:18.540000 audit[5959]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400018db10 a2=78 a3=0 items=0 ppid=2658 pid=5959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:18.540000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3962326637336135313265613837323338316265376238633831353532 Jun 25 14:19:18.601379 containerd[1804]: time="2024-06-25T14:19:18.601318946Z" level=info msg="StartContainer for \"9b2f73a512ea872381be7b8c815520a48bca7ebc984477e0826c9b65c7ef9098\" returns successfully" Jun 25 14:19:22.188857 kubelet[3044]: E0625 14:19:22.188781 3044 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jun 25 14:19:32.189599 kubelet[3044]: E0625 14:19:32.189524 3044 controller.go:195] "Failed to update lease" err="Put \"https://172.31.16.245:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-245?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"