Feb 12 20:22:19.962656 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Feb 12 20:22:19.962696 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Feb 12 18:07:00 -00 2024
Feb 12 20:22:19.962719 kernel: efi: EFI v2.70 by EDK II
Feb 12 20:22:19.962734 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x71a8cf98
Feb 12 20:22:19.962748 kernel: ACPI: Early table checksum verification disabled
Feb 12 20:22:19.962761 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Feb 12 20:22:19.962777 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Feb 12 20:22:19.962792 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Feb 12 20:22:19.962805 kernel: ACPI: DSDT 0x0000000078640000 00154F (v02 AMAZON AMZNDSDT 00000001 INTL 20160527)
Feb 12 20:22:19.962819 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Feb 12 20:22:19.962838 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Feb 12 20:22:19.962852 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Feb 12 20:22:19.962866 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Feb 12 20:22:19.962879 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Feb 12 20:22:19.962896 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Feb 12 20:22:19.962915 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Feb 12 20:22:19.962930 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Feb 12 20:22:19.962945 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Feb 12 20:22:19.962959 kernel: printk: bootconsole [uart0] enabled
Feb 12 20:22:19.962974 kernel: NUMA: Failed to initialise from firmware
Feb 12 20:22:19.962988 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:22:19.963003 kernel: NUMA: NODE_DATA [mem 0x4b5841900-0x4b5846fff]
Feb 12 20:22:19.963018 kernel: Zone ranges:
Feb 12 20:22:19.963032 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Feb 12 20:22:19.963047 kernel:   DMA32    empty
Feb 12 20:22:19.963062 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Feb 12 20:22:19.963082 kernel: Movable zone start for each node
Feb 12 20:22:19.963096 kernel: Early memory node ranges
Feb 12 20:22:19.963111 kernel:   node   0: [mem 0x0000000040000000-0x00000000786effff]
Feb 12 20:22:19.963126 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Feb 12 20:22:19.963140 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Feb 12 20:22:19.963154 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Feb 12 20:22:19.963169 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Feb 12 20:22:19.963183 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Feb 12 20:22:19.963198 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Feb 12 20:22:19.963212 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Feb 12 20:22:19.963227 kernel: psci: probing for conduit method from ACPI.
Feb 12 20:22:19.963242 kernel: psci: PSCIv1.0 detected in firmware.
Feb 12 20:22:19.963260 kernel: psci: Using standard PSCI v0.2 function IDs
Feb 12 20:22:19.963275 kernel: psci: Trusted OS migration not required
Feb 12 20:22:19.963296 kernel: psci: SMC Calling Convention v1.1
Feb 12 20:22:19.963313 kernel: ACPI: SRAT not present
Feb 12 20:22:19.963328 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Feb 12 20:22:19.963348 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Feb 12 20:22:19.963364 kernel: pcpu-alloc: [0] 0 [0] 1
Feb 12 20:22:19.963379 kernel: Detected PIPT I-cache on CPU0
Feb 12 20:22:19.963395 kernel: CPU features: detected: GIC system register CPU interface
Feb 12 20:22:19.963410 kernel: CPU features: detected: Spectre-v2
Feb 12 20:22:19.963425 kernel: CPU features: detected: Spectre-v3a
Feb 12 20:22:19.963440 kernel: CPU features: detected: Spectre-BHB
Feb 12 20:22:19.963455 kernel: CPU features: kernel page table isolation forced ON by KASLR
Feb 12 20:22:19.963470 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Feb 12 20:22:19.963485 kernel: CPU features: detected: ARM erratum 1742098
Feb 12 20:22:19.963501 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Feb 12 20:22:19.963520 kernel: Built 1 zonelists, mobility grouping on.  Total pages: 991872
Feb 12 20:22:19.963535 kernel: Policy zone: Normal
Feb 12 20:22:19.963589 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:22:19.963607 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 12 20:22:19.963623 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 12 20:22:19.963638 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 12 20:22:19.963654 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 12 20:22:19.963670 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Feb 12 20:22:19.963686 kernel: Memory: 3826316K/4030464K available (9792K kernel code, 2092K rwdata, 7556K rodata, 34688K init, 778K bss, 204148K reserved, 0K cma-reserved)
Feb 12 20:22:19.963702 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 12 20:22:19.963722 kernel: trace event string verifier disabled
Feb 12 20:22:19.963738 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 12 20:22:19.963754 kernel: rcu: RCU event tracing is enabled.
Feb 12 20:22:19.963770 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 12 20:22:19.963786 kernel: Trampoline variant of Tasks RCU enabled.
Feb 12 20:22:19.963801 kernel: Tracing variant of Tasks RCU enabled.
Feb 12 20:22:19.963817 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 12 20:22:19.963833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 12 20:22:19.963848 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Feb 12 20:22:19.963863 kernel: GICv3: 96 SPIs implemented
Feb 12 20:22:19.963878 kernel: GICv3: 0 Extended SPIs implemented
Feb 12 20:22:19.963894 kernel: GICv3: Distributor has no Range Selector support
Feb 12 20:22:19.963913 kernel: Root IRQ handler: gic_handle_irq
Feb 12 20:22:19.963929 kernel: GICv3: 16 PPIs implemented
Feb 12 20:22:19.963944 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Feb 12 20:22:19.963959 kernel: ACPI: SRAT not present
Feb 12 20:22:19.963974 kernel: ITS [mem 0x10080000-0x1009ffff]
Feb 12 20:22:19.963989 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1)
Feb 12 20:22:19.964005 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1)
Feb 12 20:22:19.964020 kernel: GICv3: using LPI property table @0x00000004000c0000
Feb 12 20:22:19.964036 kernel: ITS: Using hypervisor restricted LPI range [128]
Feb 12 20:22:19.964051 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000
Feb 12 20:22:19.964066 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Feb 12 20:22:19.964086 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Feb 12 20:22:19.964102 kernel: sched_clock: 56 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Feb 12 20:22:19.964118 kernel: Console: colour dummy device 80x25
Feb 12 20:22:19.964134 kernel: printk: console [tty1] enabled
Feb 12 20:22:19.964149 kernel: ACPI: Core revision 20210730
Feb 12 20:22:19.964165 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Feb 12 20:22:19.964181 kernel: pid_max: default: 32768 minimum: 301
Feb 12 20:22:19.964197 kernel: LSM: Security Framework initializing
Feb 12 20:22:19.964213 kernel: SELinux:  Initializing.
Feb 12 20:22:19.964229 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:22:19.964249 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Feb 12 20:22:19.964265 kernel: rcu: Hierarchical SRCU implementation.
Feb 12 20:22:19.964280 kernel: Platform MSI: ITS@0x10080000 domain created
Feb 12 20:22:19.964296 kernel: PCI/MSI: ITS@0x10080000 domain created
Feb 12 20:22:19.964312 kernel: Remapping and enabling EFI services.
Feb 12 20:22:19.964328 kernel: smp: Bringing up secondary CPUs ...
Feb 12 20:22:19.964343 kernel: Detected PIPT I-cache on CPU1
Feb 12 20:22:19.964359 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Feb 12 20:22:19.964375 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000
Feb 12 20:22:19.964395 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Feb 12 20:22:19.964411 kernel: smp: Brought up 1 node, 2 CPUs
Feb 12 20:22:19.964426 kernel: SMP: Total of 2 processors activated.
Feb 12 20:22:19.964442 kernel: CPU features: detected: 32-bit EL0 Support
Feb 12 20:22:19.964457 kernel: CPU features: detected: 32-bit EL1 Support
Feb 12 20:22:19.964473 kernel: CPU features: detected: CRC32 instructions
Feb 12 20:22:19.964489 kernel: CPU: All CPU(s) started at EL1
Feb 12 20:22:19.964504 kernel: alternatives: patching kernel code
Feb 12 20:22:19.964520 kernel: devtmpfs: initialized
Feb 12 20:22:19.965587 kernel: KASLR disabled due to lack of seed
Feb 12 20:22:19.965623 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 12 20:22:19.965642 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 12 20:22:19.965672 kernel: pinctrl core: initialized pinctrl subsystem
Feb 12 20:22:19.965693 kernel: SMBIOS 3.0.0 present.
Feb 12 20:22:19.965709 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Feb 12 20:22:19.965726 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 12 20:22:19.965742 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Feb 12 20:22:19.965758 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Feb 12 20:22:19.965775 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Feb 12 20:22:19.965791 kernel: audit: initializing netlink subsys (disabled)
Feb 12 20:22:19.965808 kernel: audit: type=2000 audit(0.247:1): state=initialized audit_enabled=0 res=1
Feb 12 20:22:19.965829 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 12 20:22:19.965845 kernel: cpuidle: using governor menu
Feb 12 20:22:19.965861 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Feb 12 20:22:19.965878 kernel: ASID allocator initialised with 32768 entries
Feb 12 20:22:19.965894 kernel: ACPI: bus type PCI registered
Feb 12 20:22:19.965915 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 12 20:22:19.965932 kernel: Serial: AMBA PL011 UART driver
Feb 12 20:22:19.965948 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 12 20:22:19.965964 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Feb 12 20:22:19.965981 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 12 20:22:19.965997 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Feb 12 20:22:19.966013 kernel: cryptd: max_cpu_qlen set to 1000
Feb 12 20:22:19.966030 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Feb 12 20:22:19.966046 kernel: ACPI: Added _OSI(Module Device)
Feb 12 20:22:19.966066 kernel: ACPI: Added _OSI(Processor Device)
Feb 12 20:22:19.966082 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 12 20:22:19.966099 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 12 20:22:19.966115 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 12 20:22:19.966131 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 12 20:22:19.966148 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 12 20:22:19.966164 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 12 20:22:19.966180 kernel: ACPI: Interpreter enabled
Feb 12 20:22:19.966196 kernel: ACPI: Using GIC for interrupt routing
Feb 12 20:22:19.966217 kernel: ACPI: MCFG table detected, 1 entries
Feb 12 20:22:19.966233 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f])
Feb 12 20:22:19.966569 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 12 20:22:19.966789 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 12 20:22:19.966994 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 12 20:22:19.967205 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00
Feb 12 20:22:19.967416 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f]
Feb 12 20:22:19.967449 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Feb 12 20:22:19.967467 kernel: acpiphp: Slot [1] registered
Feb 12 20:22:19.967483 kernel: acpiphp: Slot [2] registered
Feb 12 20:22:19.967500 kernel: acpiphp: Slot [3] registered
Feb 12 20:22:19.967524 kernel: acpiphp: Slot [4] registered
Feb 12 20:22:19.967583 kernel: acpiphp: Slot [5] registered
Feb 12 20:22:19.967605 kernel: acpiphp: Slot [6] registered
Feb 12 20:22:19.967622 kernel: acpiphp: Slot [7] registered
Feb 12 20:22:19.967639 kernel: acpiphp: Slot [8] registered
Feb 12 20:22:19.967661 kernel: acpiphp: Slot [9] registered
Feb 12 20:22:19.967678 kernel: acpiphp: Slot [10] registered
Feb 12 20:22:19.967694 kernel: acpiphp: Slot [11] registered
Feb 12 20:22:19.967710 kernel: acpiphp: Slot [12] registered
Feb 12 20:22:19.967726 kernel: acpiphp: Slot [13] registered
Feb 12 20:22:19.967742 kernel: acpiphp: Slot [14] registered
Feb 12 20:22:19.967758 kernel: acpiphp: Slot [15] registered
Feb 12 20:22:19.967774 kernel: acpiphp: Slot [16] registered
Feb 12 20:22:19.967790 kernel: acpiphp: Slot [17] registered
Feb 12 20:22:19.967807 kernel: acpiphp: Slot [18] registered
Feb 12 20:22:19.967828 kernel: acpiphp: Slot [19] registered
Feb 12 20:22:19.967844 kernel: acpiphp: Slot [20] registered
Feb 12 20:22:19.967860 kernel: acpiphp: Slot [21] registered
Feb 12 20:22:19.967876 kernel: acpiphp: Slot [22] registered
Feb 12 20:22:19.967892 kernel: acpiphp: Slot [23] registered
Feb 12 20:22:19.967908 kernel: acpiphp: Slot [24] registered
Feb 12 20:22:19.967925 kernel: acpiphp: Slot [25] registered
Feb 12 20:22:19.967941 kernel: acpiphp: Slot [26] registered
Feb 12 20:22:19.967957 kernel: acpiphp: Slot [27] registered
Feb 12 20:22:19.967977 kernel: acpiphp: Slot [28] registered
Feb 12 20:22:19.967994 kernel: acpiphp: Slot [29] registered
Feb 12 20:22:19.968010 kernel: acpiphp: Slot [30] registered
Feb 12 20:22:19.968026 kernel: acpiphp: Slot [31] registered
Feb 12 20:22:19.968043 kernel: PCI host bridge to bus 0000:00
Feb 12 20:22:19.968267 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Feb 12 20:22:19.968454 kernel: pci_bus 0000:00: root bus resource [io  0x0000-0xffff window]
Feb 12 20:22:19.971776 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:22:19.971991 kernel: pci_bus 0000:00: root bus resource [bus 00-0f]
Feb 12 20:22:19.972216 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Feb 12 20:22:19.972434 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Feb 12 20:22:19.973814 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Feb 12 20:22:19.974041 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Feb 12 20:22:19.974243 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Feb 12 20:22:19.974449 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:22:19.974692 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Feb 12 20:22:19.974899 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Feb 12 20:22:19.975104 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Feb 12 20:22:19.975303 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Feb 12 20:22:19.975508 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Feb 12 20:22:19.977762 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref]
Feb 12 20:22:19.977975 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff]
Feb 12 20:22:19.978178 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff]
Feb 12 20:22:19.978378 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff]
Feb 12 20:22:19.981660 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff]
Feb 12 20:22:19.981880 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Feb 12 20:22:19.982061 kernel: pci_bus 0000:00: resource 5 [io  0x0000-0xffff window]
Feb 12 20:22:19.982242 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Feb 12 20:22:19.982273 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Feb 12 20:22:19.982291 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Feb 12 20:22:19.982309 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Feb 12 20:22:19.982325 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Feb 12 20:22:19.982342 kernel: iommu: Default domain type: Translated
Feb 12 20:22:19.982359 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Feb 12 20:22:19.982376 kernel: vgaarb: loaded
Feb 12 20:22:19.982392 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 12 20:22:19.982408 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 12 20:22:19.982429 kernel: PTP clock support registered
Feb 12 20:22:19.982445 kernel: Registered efivars operations
Feb 12 20:22:19.982462 kernel: clocksource: Switched to clocksource arch_sys_counter
Feb 12 20:22:19.982478 kernel: VFS: Disk quotas dquot_6.6.0
Feb 12 20:22:19.982494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 12 20:22:19.982511 kernel: pnp: PnP ACPI init
Feb 12 20:22:19.982757 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Feb 12 20:22:19.982785 kernel: pnp: PnP ACPI: found 1 devices
Feb 12 20:22:19.982802 kernel: NET: Registered PF_INET protocol family
Feb 12 20:22:19.982824 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 12 20:22:19.982868 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Feb 12 20:22:19.982887 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 12 20:22:19.982904 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 12 20:22:19.982921 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Feb 12 20:22:19.982938 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Feb 12 20:22:19.982954 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:22:19.982971 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Feb 12 20:22:19.982988 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 12 20:22:19.983009 kernel: PCI: CLS 0 bytes, default 64
Feb 12 20:22:19.983026 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Feb 12 20:22:19.983042 kernel: kvm [1]: HYP mode not available
Feb 12 20:22:19.983058 kernel: Initialise system trusted keyrings
Feb 12 20:22:19.983075 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Feb 12 20:22:19.983091 kernel: Key type asymmetric registered
Feb 12 20:22:19.983108 kernel: Asymmetric key parser 'x509' registered
Feb 12 20:22:19.983124 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 12 20:22:19.983140 kernel: io scheduler mq-deadline registered
Feb 12 20:22:19.983160 kernel: io scheduler kyber registered
Feb 12 20:22:19.983177 kernel: io scheduler bfq registered
Feb 12 20:22:19.983396 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Feb 12 20:22:19.990488 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Feb 12 20:22:19.990506 kernel: ACPI: button: Power Button [PWRB]
Feb 12 20:22:19.990522 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 12 20:22:19.990593 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Feb 12 20:22:19.990820 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Feb 12 20:22:19.990850 kernel: printk: console [ttyS0] disabled
Feb 12 20:22:19.990868 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Feb 12 20:22:19.990884 kernel: printk: console [ttyS0] enabled
Feb 12 20:22:19.990900 kernel: printk: bootconsole [uart0] disabled
Feb 12 20:22:19.990917 kernel: thunder_xcv, ver 1.0
Feb 12 20:22:19.990933 kernel: thunder_bgx, ver 1.0
Feb 12 20:22:19.990949 kernel: nicpf, ver 1.0
Feb 12 20:22:19.990965 kernel: nicvf, ver 1.0
Feb 12 20:22:19.991171 kernel: rtc-efi rtc-efi.0: registered as rtc0
Feb 12 20:22:19.991367 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-02-12T20:22:19 UTC (1707769339)
Feb 12 20:22:19.991390 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 12 20:22:19.991407 kernel: NET: Registered PF_INET6 protocol family
Feb 12 20:22:19.991423 kernel: Segment Routing with IPv6
Feb 12 20:22:19.991440 kernel: In-situ OAM (IOAM) with IPv6
Feb 12 20:22:19.991456 kernel: NET: Registered PF_PACKET protocol family
Feb 12 20:22:19.991472 kernel: Key type dns_resolver registered
Feb 12 20:22:19.991488 kernel: registered taskstats version 1
Feb 12 20:22:19.991509 kernel: Loading compiled-in X.509 certificates
Feb 12 20:22:19.991526 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: c8c3faa6fd8ae0112832fff0e3d0e58448a7eb6c'
Feb 12 20:22:19.991565 kernel: Key type .fscrypt registered
Feb 12 20:22:19.991586 kernel: Key type fscrypt-provisioning registered
Feb 12 20:22:19.991602 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 12 20:22:19.991619 kernel: ima: Allocated hash algorithm: sha1
Feb 12 20:22:19.991635 kernel: ima: No architecture policies found
Feb 12 20:22:19.991652 kernel: Freeing unused kernel memory: 34688K
Feb 12 20:22:19.991668 kernel: Run /init as init process
Feb 12 20:22:19.991689 kernel:   with arguments:
Feb 12 20:22:19.991705 kernel:     /init
Feb 12 20:22:19.991721 kernel:   with environment:
Feb 12 20:22:19.991737 kernel:     HOME=/
Feb 12 20:22:19.991753 kernel:     TERM=linux
Feb 12 20:22:19.991769 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 12 20:22:19.991789 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:22:19.991810 systemd[1]: Detected virtualization amazon.
Feb 12 20:22:19.991832 systemd[1]: Detected architecture arm64.
Feb 12 20:22:19.991850 systemd[1]: Running in initrd.
Feb 12 20:22:19.991867 systemd[1]: No hostname configured, using default hostname.
Feb 12 20:22:19.991884 systemd[1]: Hostname set to .
Feb 12 20:22:19.991902 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:22:19.991920 systemd[1]: Queued start job for default target initrd.target.
Feb 12 20:22:19.991937 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:22:19.991954 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:22:19.991975 systemd[1]: Reached target paths.target.
Feb 12 20:22:19.991993 systemd[1]: Reached target slices.target.
Feb 12 20:22:19.992011 systemd[1]: Reached target swap.target.
Feb 12 20:22:19.992028 systemd[1]: Reached target timers.target.
Feb 12 20:22:19.992046 systemd[1]: Listening on iscsid.socket.
Feb 12 20:22:19.992064 systemd[1]: Listening on iscsiuio.socket.
Feb 12 20:22:19.992081 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:22:19.992099 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:22:19.992121 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:22:19.992138 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:22:19.992156 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:22:19.992173 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:22:19.992191 systemd[1]: Reached target sockets.target.
Feb 12 20:22:19.992209 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:22:19.992226 systemd[1]: Finished network-cleanup.service.
Feb 12 20:22:19.992244 systemd[1]: Starting systemd-fsck-usr.service...
Feb 12 20:22:19.992261 systemd[1]: Starting systemd-journald.service...
Feb 12 20:22:19.992282 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:22:19.992300 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:22:19.992317 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 12 20:22:19.992335 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:22:19.992353 kernel: audit: type=1130 audit(1707769339.981:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:19.992370 systemd[1]: Finished systemd-fsck-usr.service.
Feb 12 20:22:19.992391 systemd-journald[308]: Journal started
Feb 12 20:22:19.992482 systemd-journald[308]: Runtime Journal (/run/log/journal/ec238894b67dc5d92b811d240def0836) is 8.0M, max 75.4M, 67.4M free.
Feb 12 20:22:19.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:19.960604 systemd-modules-load[309]: Inserted module 'overlay'
Feb 12 20:22:20.007584 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 12 20:22:20.012426 systemd-modules-load[309]: Inserted module 'br_netfilter'
Feb 12 20:22:20.016429 kernel: Bridge firewalling registered
Feb 12 20:22:20.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.025435 kernel: audit: type=1130 audit(1707769340.014:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.025467 systemd[1]: Started systemd-journald.service.
Feb 12 20:22:20.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.038414 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 12 20:22:20.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.060512 kernel: audit: type=1130 audit(1707769340.036:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.060595 kernel: audit: type=1130 audit(1707769340.051:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.063586 kernel: SCSI subsystem initialized
Feb 12 20:22:20.067894 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 12 20:22:20.078966 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:22:20.099597 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 12 20:22:20.099672 kernel: device-mapper: uevent: version 1.0.3
Feb 12 20:22:20.099701 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 12 20:22:20.107064 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:22:20.113000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.119705 systemd-resolved[310]: Positive Trust Anchors:
Feb 12 20:22:20.119720 systemd-resolved[310]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:22:20.119778 systemd-resolved[310]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:22:20.125946 kernel: audit: type=1130 audit(1707769340.113:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.143181 systemd-modules-load[309]: Inserted module 'dm_multipath'
Feb 12 20:22:20.146478 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:22:20.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.151095 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 12 20:22:20.162612 kernel: audit: type=1130 audit(1707769340.149:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.170185 systemd[1]: Starting dracut-cmdline.service...
Feb 12 20:22:20.173172 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:22:20.181583 kernel: audit: type=1130 audit(1707769340.161:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.200780 dracut-cmdline[328]: dracut-dracut-053
Feb 12 20:22:20.203866 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:22:20.217716 kernel: audit: type=1130 audit(1707769340.205:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.218763 dracut-cmdline[328]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=0a07ee1673be713cb46dc1305004c8854c4690dc8835a87e3bc71aa6c6a62e40
Feb 12 20:22:20.333577 kernel: Loading iSCSI transport class v2.0-870.
Feb 12 20:22:20.347584 kernel: iscsi: registered transport (tcp)
Feb 12 20:22:20.371836 kernel: iscsi: registered transport (qla4xxx)
Feb 12 20:22:20.371915 kernel: QLogic iSCSI HBA Driver
Feb 12 20:22:20.582381 systemd-resolved[310]: Defaulting to hostname 'linux'.
Feb 12 20:22:20.585584 kernel: random: crng init done
Feb 12 20:22:20.585608 systemd[1]: Started systemd-resolved.service.
Feb 12 20:22:20.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.589619 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:22:20.602276 kernel: audit: type=1130 audit(1707769340.587:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.617254 systemd[1]: Finished dracut-cmdline.service.
Feb 12 20:22:20.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:20.622830 systemd[1]: Starting dracut-pre-udev.service...
Feb 12 20:22:20.689595 kernel: raid6: neonx8 gen() 6421 MB/s
Feb 12 20:22:20.707576 kernel: raid6: neonx8 xor() 4587 MB/s
Feb 12 20:22:20.725572 kernel: raid6: neonx4 gen() 6583 MB/s
Feb 12 20:22:20.743574 kernel: raid6: neonx4 xor() 4686 MB/s
Feb 12 20:22:20.761572 kernel: raid6: neonx2 gen() 5800 MB/s
Feb 12 20:22:20.779574 kernel: raid6: neonx2 xor() 4379 MB/s
Feb 12 20:22:20.797572 kernel: raid6: neonx1 gen() 4500 MB/s
Feb 12 20:22:20.815573 kernel: raid6: neonx1 xor() 3588 MB/s
Feb 12 20:22:20.833571 kernel: raid6: int64x8 gen() 3443 MB/s
Feb 12 20:22:20.851573 kernel: raid6: int64x8 xor() 2058 MB/s
Feb 12 20:22:20.869572 kernel: raid6: int64x4 gen() 3842 MB/s
Feb 12 20:22:20.887573 kernel: raid6: int64x4 xor() 2173 MB/s
Feb 12 20:22:20.905572 kernel: raid6: int64x2 gen() 3615 MB/s
Feb 12 20:22:20.923573 kernel: raid6: int64x2 xor() 1931 MB/s
Feb 12 20:22:20.941572 kernel: raid6: int64x1 gen() 2761 MB/s
Feb 12 20:22:20.961088 kernel: raid6: int64x1 xor() 1407 MB/s
Feb 12 20:22:20.961117 kernel: raid6: using algorithm neonx4 gen() 6583 MB/s
Feb 12 20:22:20.961140 kernel: raid6: .... xor() 4686 MB/s, rmw enabled
Feb 12 20:22:20.962900 kernel: raid6: using neon recovery algorithm
Feb 12 20:22:20.981580 kernel: xor: measuring software checksum speed
Feb 12 20:22:20.983574 kernel: 8regs : 9332 MB/sec
Feb 12 20:22:20.986573 kernel: 32regs : 11107 MB/sec
Feb 12 20:22:20.990492 kernel: arm64_neon : 9478 MB/sec
Feb 12 20:22:20.990524 kernel: xor: using function: 32regs (11107 MB/sec)
Feb 12 20:22:21.080602 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Feb 12 20:22:21.097512 systemd[1]: Finished dracut-pre-udev.service.
Feb 12 20:22:21.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:21.101000 audit: BPF prog-id=7 op=LOAD
Feb 12 20:22:21.101000 audit: BPF prog-id=8 op=LOAD
Feb 12 20:22:21.104247 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:22:21.133785 systemd-udevd[508]: Using default interface naming scheme 'v252'.
Feb 12 20:22:21.142909 systemd[1]: Started systemd-udevd.service.
Feb 12 20:22:21.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:21.159018 systemd[1]: Starting dracut-pre-trigger.service...
Feb 12 20:22:21.186483 dracut-pre-trigger[529]: rd.md=0: removing MD RAID activation
Feb 12 20:22:21.245409 systemd[1]: Finished dracut-pre-trigger.service.
Feb 12 20:22:21.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:21.250811 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:22:21.367495 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:22:21.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:21.467145 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Feb 12 20:22:21.467203 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Feb 12 20:22:21.479331 kernel: ena 0000:00:05.0: ENA device version: 0.10
Feb 12 20:22:21.479657 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Feb 12 20:22:21.490577 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:54:7a:2f:46:b1
Feb 12 20:22:21.492676 (udev-worker)[576]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:22:21.506595 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Feb 12 20:22:21.510715 kernel: nvme nvme0: pci function 0000:00:04.0
Feb 12 20:22:21.519578 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Feb 12 20:22:21.525576 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 12 20:22:21.525619 kernel: GPT:9289727 != 16777215
Feb 12 20:22:21.527842 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 12 20:22:21.529180 kernel: GPT:9289727 != 16777215
Feb 12 20:22:21.531101 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 12 20:22:21.532664 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:22:21.596583 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (571)
Feb 12 20:22:21.617308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 12 20:22:21.663109 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 12 20:22:21.697519 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 12 20:22:21.705338 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
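The repeated "GPT:9289727 != 16777215" warning above is the kernel noting that the backup GPT header sits at the last LBA of the original, smaller disk image rather than at the true end of the resized EBS volume. A minimal sketch of that arithmetic, assuming 512-byte logical sectors (the sector size is not shown in the log):

```python
# Sketch of the arithmetic behind "GPT:9289727 != 16777215".
# The backup GPT header lives in the final logical sector of the disk;
# an image built for a smaller disk leaves it at the old last LBA.
SECTOR = 512  # assumption: 512-byte logical sectors

def last_lba(disk_bytes: int) -> int:
    """LBA of the final sector, where the backup GPT header belongs."""
    return disk_bytes // SECTOR - 1

image_bytes = 9289728 * SECTOR    # size the image was built for
volume_bytes = 16777216 * SECTOR  # the actual (larger) EBS volume

print(last_lba(image_bytes))   # -> 9289727, where the backup header is
print(last_lba(volume_bytes))  # -> 16777215, where the kernel expects it
```

On first boot, cloud images typically repair this by relocating the backup header to the end of the disk (the log itself suggests GNU Parted; sgdisk -e does the same) before growing the root partition.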
Feb 12 20:22:21.718064 systemd[1]: Starting disk-uuid.service...
Feb 12 20:22:21.733191 disk-uuid[674]: Primary Header is updated.
Feb 12 20:22:21.733191 disk-uuid[674]: Secondary Entries is updated.
Feb 12 20:22:21.733191 disk-uuid[674]: Secondary Header is updated.
Feb 12 20:22:21.759959 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 12 20:22:21.768568 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:22:22.769575 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Feb 12 20:22:22.770324 disk-uuid[675]: The operation has completed successfully.
Feb 12 20:22:22.931715 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 12 20:22:22.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:22.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:22.931927 systemd[1]: Finished disk-uuid.service.
Feb 12 20:22:22.967197 systemd[1]: Starting verity-setup.service...
Feb 12 20:22:23.001587 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Feb 12 20:22:23.075947 systemd[1]: Found device dev-mapper-usr.device.
Feb 12 20:22:23.082422 systemd[1]: Mounting sysusr-usr.mount...
Feb 12 20:22:23.089450 systemd[1]: Finished verity-setup.service.
Feb 12 20:22:23.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.171612 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 12 20:22:23.172313 systemd[1]: Mounted sysusr-usr.mount.
Feb 12 20:22:23.172665 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 12 20:22:23.173901 systemd[1]: Starting ignition-setup.service...
Feb 12 20:22:23.178529 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 12 20:22:23.210580 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:22:23.210645 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:22:23.210669 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:22:23.220580 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:22:23.237981 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 12 20:22:23.259212 systemd[1]: Finished ignition-setup.service.
Feb 12 20:22:23.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.260842 systemd[1]: Starting ignition-fetch-offline.service...
Feb 12 20:22:23.334831 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 12 20:22:23.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.341000 audit: BPF prog-id=9 op=LOAD
Feb 12 20:22:23.345010 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:22:23.392281 systemd-networkd[1187]: lo: Link UP
Feb 12 20:22:23.392304 systemd-networkd[1187]: lo: Gained carrier
Feb 12 20:22:23.396715 systemd-networkd[1187]: Enumeration completed
Feb 12 20:22:23.396871 systemd[1]: Started systemd-networkd.service.
Feb 12 20:22:23.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.402766 systemd[1]: Reached target network.target.
Feb 12 20:22:23.407639 systemd[1]: Starting iscsiuio.service...
Feb 12 20:22:23.408255 systemd-networkd[1187]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:22:23.421482 systemd[1]: Started iscsiuio.service.
Feb 12 20:22:23.423000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.426261 systemd[1]: Starting iscsid.service...
Feb 12 20:22:23.434801 iscsid[1192]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:22:23.434801 iscsid[1192]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Feb 12 20:22:23.434801 iscsid[1192]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 12 20:22:23.434801 iscsid[1192]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 12 20:22:23.434801 iscsid[1192]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 12 20:22:23.434801 iscsid[1192]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 12 20:22:23.463166 systemd-networkd[1187]: eth0: Link UP
Feb 12 20:22:23.463189 systemd-networkd[1187]: eth0: Gained carrier
Feb 12 20:22:23.470215 systemd[1]: Started iscsid.service.
Feb 12 20:22:23.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.484838 systemd[1]: Starting dracut-initqueue.service...
Feb 12 20:22:23.495708 systemd-networkd[1187]: eth0: DHCPv4 address 172.31.16.195/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 20:22:23.513701 systemd[1]: Finished dracut-initqueue.service.
Feb 12 20:22:23.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.516148 systemd[1]: Reached target remote-fs-pre.target.
Feb 12 20:22:23.521803 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:22:23.523965 systemd[1]: Reached target remote-fs.target.
Feb 12 20:22:23.541905 systemd[1]: Starting dracut-pre-mount.service...
Feb 12 20:22:23.561337 systemd[1]: Finished dracut-pre-mount.service.
Feb 12 20:22:23.563000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.801913 ignition[1125]: Ignition 2.14.0
Feb 12 20:22:23.801939 ignition[1125]: Stage: fetch-offline
Feb 12 20:22:23.802238 ignition[1125]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:23.802298 ignition[1125]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:23.822860 ignition[1125]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:23.826178 ignition[1125]: Ignition finished successfully
Feb 12 20:22:23.828859 systemd[1]: Finished ignition-fetch-offline.service.
Feb 12 20:22:23.842469 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 12 20:22:23.843025 kernel: audit: type=1130 audit(1707769343.832:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.834442 systemd[1]: Starting ignition-fetch.service...
Feb 12 20:22:23.853676 ignition[1211]: Ignition 2.14.0
Feb 12 20:22:23.853705 ignition[1211]: Stage: fetch
Feb 12 20:22:23.854008 ignition[1211]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:23.854066 ignition[1211]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:23.870181 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:23.873103 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:23.888954 ignition[1211]: INFO : PUT result: OK
Feb 12 20:22:23.893711 ignition[1211]: DEBUG : parsed url from cmdline: ""
Feb 12 20:22:23.896059 ignition[1211]: INFO : no config URL provided
Feb 12 20:22:23.896059 ignition[1211]: INFO : reading system config file "/usr/lib/ignition/user.ign"
Feb 12 20:22:23.896059 ignition[1211]: INFO : no config at "/usr/lib/ignition/user.ign"
Feb 12 20:22:23.896059 ignition[1211]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:23.907236 ignition[1211]: INFO : PUT result: OK
Feb 12 20:22:23.909349 ignition[1211]: INFO : GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Feb 12 20:22:23.913375 ignition[1211]: INFO : GET result: OK
Feb 12 20:22:23.916469 ignition[1211]: DEBUG : parsing config with SHA512: 53194df33e2ed421ce6d1d8ae4a7220fd01ea8be455aad0c6a6d931b48e7ee25d24b59600e73f09008e758a3aac48d8654f3406c903d3d2c28bb0e4295ca1db0
Feb 12 20:22:23.982683 unknown[1211]: fetched base config from "system"
Feb 12 20:22:23.984931 unknown[1211]: fetched base config from "system"
Feb 12 20:22:23.987010 unknown[1211]: fetched user config from "aws"
Feb 12 20:22:23.990423 ignition[1211]: fetch: fetch complete
Feb 12 20:22:23.992340 ignition[1211]: fetch: fetch passed
Feb 12 20:22:23.993675 ignition[1211]: Ignition finished successfully
Feb 12 20:22:23.996370 systemd[1]: Finished ignition-fetch.service.
Feb 12 20:22:24.013887 kernel: audit: type=1130 audit(1707769343.997:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:23.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.001794 systemd[1]: Starting ignition-kargs.service...
Feb 12 20:22:24.029068 ignition[1217]: Ignition 2.14.0
Feb 12 20:22:24.029098 ignition[1217]: Stage: kargs
Feb 12 20:22:24.029399 ignition[1217]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:24.029458 ignition[1217]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:24.041882 ignition[1217]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:24.045009 ignition[1217]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:24.049236 ignition[1217]: INFO : PUT result: OK
Feb 12 20:22:24.052492 ignition[1217]: kargs: kargs passed
Feb 12 20:22:24.052636 ignition[1217]: Ignition finished successfully
Feb 12 20:22:24.059501 systemd[1]: Finished ignition-kargs.service.
Feb 12 20:22:24.072137 kernel: audit: type=1130 audit(1707769344.058:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.061153 systemd[1]: Starting ignition-disks.service...
Feb 12 20:22:24.081020 ignition[1223]: Ignition 2.14.0
Feb 12 20:22:24.082997 ignition[1223]: Stage: disks
Feb 12 20:22:24.084784 ignition[1223]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:24.087643 ignition[1223]: parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:24.099269 ignition[1223]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:24.102096 ignition[1223]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:24.105651 ignition[1223]: INFO : PUT result: OK
Feb 12 20:22:24.110921 ignition[1223]: disks: disks passed
Feb 12 20:22:24.111024 ignition[1223]: Ignition finished successfully
Feb 12 20:22:24.116053 systemd[1]: Finished ignition-disks.service.
Feb 12 20:22:24.128996 kernel: audit: type=1130 audit(1707769344.118:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.119887 systemd[1]: Reached target initrd-root-device.target.
Feb 12 20:22:24.129000 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:22:24.131093 systemd[1]: Reached target local-fs.target.
Feb 12 20:22:24.134858 systemd[1]: Reached target sysinit.target.
Feb 12 20:22:24.136850 systemd[1]: Reached target basic.target.
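The PUT/GET pairs Ignition logs against 169.254.169.254 during the fetch, kargs, and disks stages above are the IMDSv2 session flow: a PUT to /latest/api/token with a TTL header yields a session token, and later GETs (such as the one for user data) present that token in a header. A minimal sketch of the two requests with Python's urllib; the request shapes follow the log, but actually sending them only succeeds from inside an EC2 instance:

```python
import urllib.request

# Assumption: standard IMDSv2 endpoints; the 2019-10-01 API version
# path is taken from the log above.
IMDS = "http://169.254.169.254"

def token_request(ttl_seconds: int = 21600) -> urllib.request.Request:
    # Step 1: PUT /latest/api/token returns a short-lived session token.
    return urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
    )

def user_data_request(token: str) -> urllib.request.Request:
    # Step 2: GET user data, presenting the token from step 1.
    return urllib.request.Request(
        f"{IMDS}/2019-10-01/user-data",
        headers={"X-aws-ec2-metadata-token": token},
    )

print(token_request().get_method())  # -> PUT
```

On an instance, each request would be passed to urllib.request.urlopen; off-instance the link-local address is simply unreachable, which is why these calls never appear outside EC2 boot logs.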
Feb 12 20:22:24.140199 systemd[1]: Starting systemd-fsck-root.service...
Feb 12 20:22:24.185005 systemd-fsck[1231]: ROOT: clean, 602/553520 files, 56014/553472 blocks
Feb 12 20:22:24.193234 systemd[1]: Finished systemd-fsck-root.service.
Feb 12 20:22:24.211656 kernel: audit: type=1130 audit(1707769344.195:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.198515 systemd[1]: Mounting sysroot.mount...
Feb 12 20:22:24.229602 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 12 20:22:24.232389 systemd[1]: Mounted sysroot.mount.
Feb 12 20:22:24.236157 systemd[1]: Reached target initrd-root-fs.target.
Feb 12 20:22:24.251330 systemd[1]: Mounting sysroot-usr.mount...
Feb 12 20:22:24.259600 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Feb 12 20:22:24.259697 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 12 20:22:24.263021 systemd[1]: Reached target ignition-diskful.target.
Feb 12 20:22:24.273709 systemd[1]: Mounted sysroot-usr.mount.
Feb 12 20:22:24.293320 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:22:24.302898 systemd[1]: Starting initrd-setup-root.service...
Feb 12 20:22:24.315600 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1248)
Feb 12 20:22:24.323672 initrd-setup-root[1253]: cut: /sysroot/etc/passwd: No such file or directory
Feb 12 20:22:24.329494 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:22:24.329533 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:22:24.329579 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:22:24.337653 initrd-setup-root[1279]: cut: /sysroot/etc/group: No such file or directory
Feb 12 20:22:24.340570 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:22:24.346208 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:22:24.352178 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Feb 12 20:22:24.361585 initrd-setup-root[1295]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 12 20:22:24.546036 systemd[1]: Finished initrd-setup-root.service.
Feb 12 20:22:24.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.550184 systemd[1]: Starting ignition-mount.service...
Feb 12 20:22:24.560957 kernel: audit: type=1130 audit(1707769344.547:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.564129 systemd[1]: Starting sysroot-boot.service...
Feb 12 20:22:24.573420 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:22:24.575907 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 12 20:22:24.604939 ignition[1314]: INFO : Ignition 2.14.0
Feb 12 20:22:24.604939 ignition[1314]: INFO : Stage: mount
Feb 12 20:22:24.612311 ignition[1314]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:24.612311 ignition[1314]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:24.621638 systemd[1]: Finished sysroot-boot.service.
Feb 12 20:22:24.631216 ignition[1314]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:24.634203 ignition[1314]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:24.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.645378 ignition[1314]: INFO : PUT result: OK
Feb 12 20:22:24.647257 kernel: audit: type=1130 audit(1707769344.633:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.651368 ignition[1314]: INFO : mount: mount passed
Feb 12 20:22:24.653670 ignition[1314]: INFO : Ignition finished successfully
Feb 12 20:22:24.657185 systemd[1]: Finished ignition-mount.service.
Feb 12 20:22:24.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.662357 systemd[1]: Starting ignition-files.service...
Feb 12 20:22:24.672663 kernel: audit: type=1130 audit(1707769344.659:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:24.679751 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 12 20:22:24.698592 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/nvme0n1p6 scanned by mount (1323)
Feb 12 20:22:24.704800 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Feb 12 20:22:24.704853 kernel: BTRFS info (device nvme0n1p6): using free space tree
Feb 12 20:22:24.704877 kernel: BTRFS info (device nvme0n1p6): has skinny extents
Feb 12 20:22:24.713577 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Feb 12 20:22:24.718244 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 12 20:22:24.737200 ignition[1342]: INFO : Ignition 2.14.0
Feb 12 20:22:24.740829 ignition[1342]: INFO : Stage: files
Feb 12 20:22:24.740829 ignition[1342]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 12 20:22:24.740829 ignition[1342]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b
Feb 12 20:22:24.756954 ignition[1342]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Feb 12 20:22:24.760133 ignition[1342]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Feb 12 20:22:24.766636 ignition[1342]: INFO : PUT result: OK
Feb 12 20:22:24.772728 ignition[1342]: DEBUG : files: compiled without relabeling support, skipping
Feb 12 20:22:24.777250 ignition[1342]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 12 20:22:24.777250 ignition[1342]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 12 20:22:24.824139 ignition[1342]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 12 20:22:24.827961 ignition[1342]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 12 20:22:24.832615 unknown[1342]: wrote ssh authorized keys file for user: core
Feb 12 20:22:24.835253 ignition[1342]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 12 20:22:24.839886 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:22:24.844211 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 12 20:22:24.844211 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 20:22:24.844211 ignition[1342]: INFO : GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
Feb 12 20:22:24.943107 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:25.021345 systemd-networkd[1187]: eth0: Gained IPv6LL
Feb 12 20:22:25.069967 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
Feb 12 20:22:25.074724 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 20:22:25.074724 ignition[1342]: INFO : GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Feb 12 20:22:25.504260 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:25.779457 ignition[1342]: DEBUG : file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Feb 12 20:22:25.785801 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Feb 12 20:22:25.785801 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 20:22:25.785801 ignition[1342]: INFO : GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Feb 12 20:22:26.176029 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:26.560706 ignition[1342]: DEBUG : file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Feb 12 20:22:26.567826 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Feb 12 20:22:26.567826 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:22:26.567826 ignition[1342]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Feb 12 20:22:26.699654 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:28.203896 ignition[1342]: DEBUG : file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Feb 12 20:22:28.213700 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 12 20:22:28.213700 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 20:22:28.213700 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:22:28.233370 ignition[1342]: INFO : op(1): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2516108166"
Feb 12 20:22:28.243482 ignition[1342]: CRITICAL : op(1): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2516108166": device or resource busy
Feb 12 20:22:28.243482 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2516108166", trying btrfs: device or resource busy
Feb 12 20:22:28.243482 ignition[1342]: INFO : op(2): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2516108166"
Feb 12 20:22:28.256701 kernel: BTRFS info: devid 1 device path /dev/nvme0n1p6 changed to /dev/disk/by-label/OEM scanned by ignition (1345)
Feb 12 20:22:28.256742 ignition[1342]: INFO : op(2): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2516108166"
Feb 12 20:22:28.261634 ignition[1342]: INFO : op(3): [started] unmounting "/mnt/oem2516108166"
Feb 12 20:22:28.261634 ignition[1342]: INFO : op(3): [finished] unmounting "/mnt/oem2516108166"
Feb 12 20:22:28.261634 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/eks/bootstrap.sh"
Feb 12 20:22:28.261634 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:22:28.261634 ignition[1342]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Feb 12 20:22:28.284069 systemd[1]: mnt-oem2516108166.mount: Deactivated successfully.
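Each "file matches expected sum of: …" line above is Ignition checking a downloaded artifact against the SHA-512 digest supplied in the config (the same digest scheme it logs when "parsing config with SHA512"). A minimal sketch of that check using Python's hashlib; the helper name is illustrative, not Ignition's API:

```python
import hashlib

def sha512_matches(data: bytes, expected_hex: str) -> bool:
    # Hash the payload and compare against the expected hex digest,
    # mirroring Ignition's "file matches expected sum of: ..." check.
    return hashlib.sha512(data).hexdigest() == expected_hex.lower()

payload = b"example artifact bytes"
digest = hashlib.sha512(payload).hexdigest()
assert sha512_matches(payload, digest)           # intact download passes
assert not sha512_matches(b"corrupted", digest)  # altered payload fails
```

A mismatch makes the files stage fail rather than install a corrupted binary, which is why each GET in the log is followed by a digest comparison before the "[finished] writing file" line.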
Feb 12 20:22:28.324639 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:28.862364 ignition[1342]: DEBUG : file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Feb 12 20:22:28.879939 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 12 20:22:28.879939 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:22:28.879939 ignition[1342]: INFO : GET https://dl.k8s.io/release/v1.26.5/bin/linux/arm64/kubectl: attempt #1
Feb 12 20:22:28.924395 ignition[1342]: INFO : GET result: OK
Feb 12 20:22:29.572290 ignition[1342]: DEBUG : file matches expected sum of: 3672fda0beebbbd636a2088f427463cbad32683ea4fbb1df61650552e63846b6a47db803ccb70c3db0a8f24746a23a5632bdc15a3fb78f4f7d833e7f86763c2a
Feb 12 20:22:29.578642 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 12 20:22:29.578642 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:22:29.578642 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 12 20:22:29.578642 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/install.sh"
Feb 12 20:22:29.597080 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/install.sh"
Feb 12 20:22:29.601713 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:22:29.606302 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 12 20:22:29.610676 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:22:29.616437 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 12 20:22:29.621339 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:22:29.626276 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 12 20:22:29.631422 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:22:29.635774 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 12 20:22:29.640141 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 20:22:29.645001 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:22:29.659674 ignition[1342]: INFO : op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3102815195"
Feb 12 20:22:29.663664 ignition[1342]: CRITICAL : op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3102815195": device or resource busy
Feb 12 20:22:29.663664 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3102815195", trying btrfs: device or resource busy
Feb 12 20:22:29.663664 ignition[1342]: INFO : op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3102815195"
Feb 12 20:22:29.675970 ignition[1342]: INFO : op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3102815195"
Feb 12 20:22:29.687589 ignition[1342]: INFO : op(6): [started] unmounting "/mnt/oem3102815195"
Feb 12 20:22:29.690754 ignition[1342]: INFO : op(6): [finished] unmounting "/mnt/oem3102815195"
Feb 12 20:22:29.690754 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 12 20:22:29.690754 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 20:22:29.708584 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:22:29.691230 systemd[1]: mnt-oem3102815195.mount: Deactivated successfully.
Feb 12 20:22:29.728111 ignition[1342]: INFO : op(7): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2653069381"
Feb 12 20:22:29.731652 ignition[1342]: CRITICAL : op(7): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2653069381": device or resource busy
Feb 12 20:22:29.731652 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2653069381", trying btrfs: device or resource busy
Feb 12 20:22:29.731652 ignition[1342]: INFO : op(8): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2653069381"
Feb 12 20:22:29.745798 ignition[1342]: INFO : op(8): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2653069381"
Feb 12 20:22:29.745798 ignition[1342]: INFO : op(9): [started] unmounting "/mnt/oem2653069381"
Feb 12 20:22:29.745798 ignition[1342]: INFO : op(9): [finished] unmounting "/mnt/oem2653069381"
Feb 12 20:22:29.745798 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/etc/amazon/ssm/amazon-ssm-agent.json"
Feb 12 20:22:29.745798 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/amazon/ssm/seelog.xml"
Feb 12 20:22:29.745798 ignition[1342]: INFO : oem config not found in "/usr/share/oem", looking on oem partition
Feb 12 20:22:29.754161 systemd[1]:
mnt-oem2653069381.mount: Deactivated successfully. Feb 12 20:22:29.797944 ignition[1342]: INFO : op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2815730123" Feb 12 20:22:29.808452 ignition[1342]: CRITICAL : op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2815730123": device or resource busy Feb 12 20:22:29.808452 ignition[1342]: ERROR : failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2815730123", trying btrfs: device or resource busy Feb 12 20:22:29.808452 ignition[1342]: INFO : op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2815730123" Feb 12 20:22:29.837100 ignition[1342]: INFO : op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2815730123" Feb 12 20:22:29.837100 ignition[1342]: INFO : op(c): [started] unmounting "/mnt/oem2815730123" Feb 12 20:22:29.837100 ignition[1342]: INFO : op(c): [finished] unmounting "/mnt/oem2815730123" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/amazon/ssm/seelog.xml" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(14): [started] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(14): [finished] processing unit "coreos-metadata-sshkeys@.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(15): [started] processing unit "amazon-ssm-agent.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(15): op(16): [started] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(15): op(16): [finished] writing unit "amazon-ssm-agent.service" at "/sysroot/etc/systemd/system/amazon-ssm-agent.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(15): [finished] processing unit "amazon-ssm-agent.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(17): [started] processing unit 
"nvidia.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(17): [finished] processing unit "nvidia.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(18): [started] processing unit "containerd.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(18): op(19): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(18): op(19): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(18): [finished] processing unit "containerd.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 12 20:22:29.837100 ignition[1342]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 12 20:22:29.965468 kernel: audit: type=1130 audit(1707769349.855:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:29.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:29.850745 systemd[1]: Finished ignition-files.service. 
Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(20): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(20): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(21): [started] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(21): [finished] setting preset to enabled for "amazon-ssm-agent.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(23): [started] setting preset 
to enabled for "prepare-cni-plugins.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(25): [started] setting preset to enabled for "prepare-helm.service" Feb 12 20:22:29.967634 ignition[1342]: INFO : files: op(25): [finished] setting preset to enabled for "prepare-helm.service" Feb 12 20:22:30.085331 kernel: audit: type=1130 audit(1707769349.985:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.085371 kernel: audit: type=1130 audit(1707769350.050:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.085397 kernel: audit: type=1131 audit(1707769350.050:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:29.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:30.050000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:29.877893 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 12 20:22:30.088090 ignition[1342]: INFO : files: createResultFile: createFiles: op(26): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:22:30.088090 ignition[1342]: INFO : files: createResultFile: createFiles: op(26): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 12 20:22:30.088090 ignition[1342]: INFO : files: files passed Feb 12 20:22:30.088090 ignition[1342]: INFO : Ignition finished successfully Feb 12 20:22:29.884172 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 12 20:22:30.105262 initrd-setup-root-after-ignition[1367]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 12 20:22:29.949879 systemd[1]: Starting ignition-quench.service... Feb 12 20:22:29.982872 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 12 20:22:29.996702 systemd[1]: Reached target ignition-complete.target. Feb 12 20:22:30.000681 systemd[1]: Starting initrd-parse-etc.service... Feb 12 20:22:30.013510 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 12 20:22:30.013771 systemd[1]: Finished ignition-quench.service. Feb 12 20:22:30.135494 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 12 20:22:30.138104 systemd[1]: Finished initrd-parse-etc.service. Feb 12 20:22:30.140617 systemd[1]: Reached target initrd-fs.target. Feb 12 20:22:30.143084 systemd[1]: Reached target initrd.target. Feb 12 20:22:30.145122 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
Feb 12 20:22:30.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.146872 systemd[1]: Starting dracut-pre-pivot.service... Feb 12 20:22:30.178428 kernel: audit: type=1130 audit(1707769350.137:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.178472 kernel: audit: type=1131 audit(1707769350.137:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.184593 systemd[1]: Finished dracut-pre-pivot.service. Feb 12 20:22:30.189959 systemd[1]: Starting initrd-cleanup.service... Feb 12 20:22:30.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.209804 kernel: audit: type=1130 audit(1707769350.186:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.210887 systemd[1]: Stopped target nss-lookup.target. Feb 12 20:22:30.215090 systemd[1]: Stopped target remote-cryptsetup.target. Feb 12 20:22:30.219340 systemd[1]: Stopped target timers.target. Feb 12 20:22:30.223095 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 12 20:22:30.225636 systemd[1]: Stopped dracut-pre-pivot.service. 
Feb 12 20:22:30.229826 systemd[1]: Stopped target initrd.target. Feb 12 20:22:30.240417 kernel: audit: type=1131 audit(1707769350.228:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.240524 systemd[1]: Stopped target basic.target. Feb 12 20:22:30.244078 systemd[1]: Stopped target ignition-complete.target. Feb 12 20:22:30.248331 systemd[1]: Stopped target ignition-diskful.target. Feb 12 20:22:30.252682 systemd[1]: Stopped target initrd-root-device.target. Feb 12 20:22:30.257121 systemd[1]: Stopped target remote-fs.target. Feb 12 20:22:30.260994 systemd[1]: Stopped target remote-fs-pre.target. Feb 12 20:22:30.265096 systemd[1]: Stopped target sysinit.target. Feb 12 20:22:30.268925 systemd[1]: Stopped target local-fs.target. Feb 12 20:22:30.272806 systemd[1]: Stopped target local-fs-pre.target. Feb 12 20:22:30.276896 systemd[1]: Stopped target swap.target. Feb 12 20:22:30.280484 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 12 20:22:30.283076 systemd[1]: Stopped dracut-pre-mount.service. Feb 12 20:22:30.285000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.287248 systemd[1]: Stopped target cryptsetup.target. Feb 12 20:22:30.310306 kernel: audit: type=1131 audit(1707769350.285:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:30.310346 kernel: audit: type=1131 audit(1707769350.298:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.298000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.298082 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 12 20:22:30.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.298292 systemd[1]: Stopped dracut-initqueue.service. Feb 12 20:22:30.300607 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 12 20:22:30.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.300816 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 12 20:22:30.310522 systemd[1]: ignition-files.service: Deactivated successfully. Feb 12 20:22:30.310886 systemd[1]: Stopped ignition-files.service. Feb 12 20:22:30.317745 systemd[1]: Stopping ignition-mount.service... Feb 12 20:22:30.330758 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 12 20:22:30.331207 systemd[1]: Stopped kmod-static-nodes.service. Feb 12 20:22:30.338922 systemd[1]: Stopping sysroot-boot.service... Feb 12 20:22:30.344362 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 12 20:22:30.352021 systemd[1]: Stopped systemd-udev-trigger.service. Feb 12 20:22:30.356252 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Feb 12 20:22:30.356741 systemd[1]: Stopped dracut-pre-trigger.service. Feb 12 20:22:30.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.369587 ignition[1381]: INFO : Ignition 2.14.0 Feb 12 20:22:30.369587 ignition[1381]: INFO : Stage: umount Feb 12 20:22:30.369587 ignition[1381]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 12 20:22:30.369587 ignition[1381]: DEBUG : parsing config with SHA512: 6629d8e825d60c9c9d4629d8547ef9a0b839d6b01b7f61a481a1f23308c924b8b0bbf10cae7f7fe3bcaf88b23d1a81baa7771c3670728d4d2a1e665216a1de7b Feb 12 20:22:30.383360 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 12 20:22:30.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.383587 systemd[1]: Finished initrd-cleanup.service. Feb 12 20:22:30.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:30.401456 ignition[1381]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Feb 12 20:22:30.404600 ignition[1381]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Feb 12 20:22:30.420502 ignition[1381]: INFO : PUT result: OK Feb 12 20:22:30.426397 ignition[1381]: INFO : umount: umount passed Feb 12 20:22:30.429339 ignition[1381]: INFO : Ignition finished successfully Feb 12 20:22:30.431984 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 12 20:22:30.434280 systemd[1]: Stopped ignition-mount.service. Feb 12 20:22:30.438336 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 12 20:22:30.440813 systemd[1]: Stopped sysroot-boot.service. Feb 12 20:22:30.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.444583 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 12 20:22:30.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.444686 systemd[1]: Stopped ignition-disks.service. Feb 12 20:22:30.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.450844 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 12 20:22:30.450971 systemd[1]: Stopped ignition-kargs.service. Feb 12 20:22:30.455000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.457182 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Feb 12 20:22:30.457280 systemd[1]: Stopped ignition-fetch.service. Feb 12 20:22:30.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.463420 systemd[1]: Stopped target network.target. Feb 12 20:22:30.467048 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 12 20:22:30.467157 systemd[1]: Stopped ignition-fetch-offline.service. Feb 12 20:22:30.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.474043 systemd[1]: Stopped target paths.target. Feb 12 20:22:30.474145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 12 20:22:30.484604 systemd[1]: Stopped systemd-ask-password-console.path. Feb 12 20:22:30.489036 systemd[1]: Stopped target slices.target. Feb 12 20:22:30.492740 systemd[1]: Stopped target sockets.target. Feb 12 20:22:30.496411 systemd[1]: iscsid.socket: Deactivated successfully. Feb 12 20:22:30.496501 systemd[1]: Closed iscsid.socket. Feb 12 20:22:30.501918 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 12 20:22:30.502006 systemd[1]: Closed iscsiuio.socket. Feb 12 20:22:30.507572 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 12 20:22:30.507705 systemd[1]: Stopped ignition-setup.service. Feb 12 20:22:30.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.514061 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 12 20:22:30.514155 systemd[1]: Stopped initrd-setup-root.service. 
Feb 12 20:22:30.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.521301 systemd[1]: Stopping systemd-networkd.service... Feb 12 20:22:30.525342 systemd[1]: Stopping systemd-resolved.service... Feb 12 20:22:30.530639 systemd-networkd[1187]: eth0: DHCPv6 lease lost Feb 12 20:22:30.536191 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 12 20:22:30.539110 systemd[1]: Stopped systemd-resolved.service. Feb 12 20:22:30.542000 audit: BPF prog-id=6 op=UNLOAD Feb 12 20:22:30.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.543879 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 12 20:22:30.546499 systemd[1]: Stopped systemd-networkd.service. Feb 12 20:22:30.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.550844 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 12 20:22:30.550960 systemd[1]: Closed systemd-networkd.socket. Feb 12 20:22:30.554000 audit: BPF prog-id=9 op=UNLOAD Feb 12 20:22:30.558669 systemd[1]: Stopping network-cleanup.service... Feb 12 20:22:30.563694 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 12 20:22:30.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:22:30.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.571000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.563820 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 12 20:22:30.568011 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 12 20:22:30.568108 systemd[1]: Stopped systemd-sysctl.service. Feb 12 20:22:30.570359 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 12 20:22:30.570450 systemd[1]: Stopped systemd-modules-load.service. Feb 12 20:22:30.575334 systemd[1]: Stopping systemd-udevd.service... Feb 12 20:22:30.597212 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 12 20:22:30.599703 systemd[1]: Stopped systemd-udevd.service. Feb 12 20:22:30.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.604075 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 12 20:22:30.606734 systemd[1]: Stopped network-cleanup.service. Feb 12 20:22:30.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:30.611109 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 12 20:22:30.611240 systemd[1]: Closed systemd-udevd-control.socket. Feb 12 20:22:30.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success'
Feb 12 20:22:30.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:30.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:30.615934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 12 20:22:30.616023 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 12 20:22:30.618302 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 12 20:22:30.618412 systemd[1]: Stopped dracut-pre-udev.service.
Feb 12 20:22:30.620680 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 12 20:22:30.620793 systemd[1]: Stopped dracut-cmdline.service.
Feb 12 20:22:30.622970 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 12 20:22:30.623070 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 12 20:22:30.642463 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 12 20:22:30.664477 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 12 20:22:30.664824 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 12 20:22:30.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:30.683142 systemd[1]: mnt-oem2815730123.mount: Deactivated successfully.
Feb 12 20:22:30.684490 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 12 20:22:30.684666 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 12 20:22:30.691325 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 12 20:22:30.691971 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 12 20:22:30.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:30.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:30.703263 systemd[1]: Reached target initrd-switch-root.target.
Feb 12 20:22:30.720885 systemd[1]: Starting initrd-switch-root.service...
Feb 12 20:22:30.736915 systemd[1]: Switching root.
Feb 12 20:22:30.740000 audit: BPF prog-id=5 op=UNLOAD
Feb 12 20:22:30.740000 audit: BPF prog-id=4 op=UNLOAD
Feb 12 20:22:30.740000 audit: BPF prog-id=3 op=UNLOAD
Feb 12 20:22:30.742000 audit: BPF prog-id=8 op=UNLOAD
Feb 12 20:22:30.742000 audit: BPF prog-id=7 op=UNLOAD
Feb 12 20:22:30.769615 iscsid[1192]: iscsid shutting down.
Feb 12 20:22:30.772104 systemd-journald[308]: Received SIGTERM from PID 1 (systemd).
Feb 12 20:22:30.772204 systemd-journald[308]: Journal stopped
Feb 12 20:22:36.426416 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 12 20:22:36.426523 kernel: SELinux: Class anon_inode not defined in policy.
Feb 12 20:22:36.426573 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 12 20:22:36.426607 kernel: SELinux: policy capability network_peer_controls=1
Feb 12 20:22:36.426641 kernel: SELinux: policy capability open_perms=1
Feb 12 20:22:36.426672 kernel: SELinux: policy capability extended_socket_class=1
Feb 12 20:22:36.426712 kernel: SELinux: policy capability always_check_network=0
Feb 12 20:22:36.426743 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 12 20:22:36.426776 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 12 20:22:36.426807 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 12 20:22:36.426846 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 12 20:22:36.426879 systemd[1]: Successfully loaded SELinux policy in 89.235ms.
Feb 12 20:22:36.426934 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 22.061ms.
Feb 12 20:22:36.426970 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 12 20:22:36.427003 systemd[1]: Detected virtualization amazon.
Feb 12 20:22:36.427036 systemd[1]: Detected architecture arm64.
Feb 12 20:22:36.427066 systemd[1]: Detected first boot.
Feb 12 20:22:36.427101 systemd[1]: Initializing machine ID from VM UUID.
Feb 12 20:22:36.427132 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 12 20:22:36.427164 systemd[1]: Populated /etc with preset unit settings.
Feb 12 20:22:36.427197 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 12 20:22:36.427236 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 12 20:22:36.427271 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 12 20:22:36.427314 systemd[1]: Queued start job for default target multi-user.target.
Feb 12 20:22:36.427347 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 12 20:22:36.427385 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 12 20:22:36.427439 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice.
Feb 12 20:22:36.427474 systemd[1]: Created slice system-getty.slice.
Feb 12 20:22:36.427506 systemd[1]: Created slice system-modprobe.slice.
Feb 12 20:22:36.427536 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 12 20:22:36.430646 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 12 20:22:36.430685 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 12 20:22:36.430728 systemd[1]: Created slice user.slice.
Feb 12 20:22:36.430760 systemd[1]: Started systemd-ask-password-console.path.
Feb 12 20:22:36.430792 systemd[1]: Started systemd-ask-password-wall.path.
Feb 12 20:22:36.430825 systemd[1]: Set up automount boot.automount.
Feb 12 20:22:36.430858 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 12 20:22:36.430891 systemd[1]: Reached target integritysetup.target.
Feb 12 20:22:36.430920 systemd[1]: Reached target remote-cryptsetup.target.
Feb 12 20:22:36.430952 systemd[1]: Reached target remote-fs.target.
Feb 12 20:22:36.430983 systemd[1]: Reached target slices.target.
Feb 12 20:22:36.431025 systemd[1]: Reached target swap.target.
Feb 12 20:22:36.431054 systemd[1]: Reached target torcx.target.
Feb 12 20:22:36.431086 systemd[1]: Reached target veritysetup.target.
Feb 12 20:22:36.431117 systemd[1]: Listening on systemd-coredump.socket.
Feb 12 20:22:36.431149 systemd[1]: Listening on systemd-initctl.socket.
Feb 12 20:22:36.431181 kernel: kauditd_printk_skb: 46 callbacks suppressed
Feb 12 20:22:36.431214 kernel: audit: type=1400 audit(1707769355.988:86): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:22:36.431244 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 12 20:22:36.431278 kernel: audit: type=1335 audit(1707769355.988:87): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 20:22:36.431312 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 12 20:22:36.431342 systemd[1]: Listening on systemd-journald.socket.
Feb 12 20:22:36.431373 systemd[1]: Listening on systemd-networkd.socket.
Feb 12 20:22:36.431421 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 12 20:22:36.431456 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 12 20:22:36.431488 systemd[1]: Listening on systemd-userdbd.socket.
Feb 12 20:22:36.431519 systemd[1]: Mounting dev-hugepages.mount...
Feb 12 20:22:36.432426 systemd[1]: Mounting dev-mqueue.mount...
Feb 12 20:22:36.432473 systemd[1]: Mounting media.mount...
Feb 12 20:22:36.432506 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 12 20:22:36.432535 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 12 20:22:36.432588 systemd[1]: Mounting tmp.mount...
Feb 12 20:22:36.432621 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 12 20:22:36.432650 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 12 20:22:36.432681 systemd[1]: Starting kmod-static-nodes.service...
Feb 12 20:22:36.432711 systemd[1]: Starting modprobe@configfs.service...
Feb 12 20:22:36.432743 systemd[1]: Starting modprobe@dm_mod.service...
Feb 12 20:22:36.432776 systemd[1]: Starting modprobe@drm.service...
Feb 12 20:22:36.432808 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 12 20:22:36.432837 systemd[1]: Starting modprobe@fuse.service...
Feb 12 20:22:36.432868 systemd[1]: Starting modprobe@loop.service...
Feb 12 20:22:36.432902 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 12 20:22:36.432938 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 12 20:22:36.432970 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 12 20:22:36.433000 systemd[1]: Starting systemd-journald.service...
Feb 12 20:22:36.433029 systemd[1]: Starting systemd-modules-load.service...
Feb 12 20:22:36.433065 kernel: fuse: init (API version 7.34)
Feb 12 20:22:36.433095 systemd[1]: Starting systemd-network-generator.service...
Feb 12 20:22:36.433124 kernel: loop: module loaded
Feb 12 20:22:36.433154 systemd[1]: Starting systemd-remount-fs.service...
Feb 12 20:22:36.433183 systemd[1]: Starting systemd-udev-trigger.service...
Feb 12 20:22:36.433214 systemd[1]: Mounted dev-hugepages.mount.
Feb 12 20:22:36.433243 systemd[1]: Mounted dev-mqueue.mount.
Feb 12 20:22:36.433272 systemd[1]: Mounted media.mount.
Feb 12 20:22:36.433303 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 12 20:22:36.433338 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 12 20:22:36.433370 systemd[1]: Mounted tmp.mount.
Feb 12 20:22:36.433400 systemd[1]: Finished kmod-static-nodes.service.
Feb 12 20:22:36.433564 kernel: audit: type=1130 audit(1707769356.324:88): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433601 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 12 20:22:36.433631 systemd[1]: Finished modprobe@configfs.service.
Feb 12 20:22:36.433661 kernel: audit: type=1130 audit(1707769356.342:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433690 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 12 20:22:36.433728 systemd[1]: Finished modprobe@dm_mod.service.
Feb 12 20:22:36.433758 kernel: audit: type=1131 audit(1707769356.353:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433786 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 12 20:22:36.433815 kernel: audit: type=1130 audit(1707769356.374:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433847 systemd[1]: Finished modprobe@drm.service.
Feb 12 20:22:36.433880 kernel: audit: type=1131 audit(1707769356.374:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433919 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 12 20:22:36.433952 kernel: audit: type=1130 audit(1707769356.398:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.433981 kernel: audit: type=1131 audit(1707769356.398:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.434010 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 12 20:22:36.434039 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 12 20:22:36.434068 systemd[1]: Finished modprobe@fuse.service.
Feb 12 20:22:36.434101 kernel: audit: type=1305 audit(1707769356.411:95): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:22:36.434132 systemd-journald[1533]: Journal started
Feb 12 20:22:36.434231 systemd-journald[1533]: Runtime Journal (/run/log/journal/ec238894b67dc5d92b811d240def0836) is 8.0M, max 75.4M, 67.4M free.
Feb 12 20:22:35.988000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 12 20:22:35.988000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Feb 12 20:22:36.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.411000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 12 20:22:36.411000 audit[1533]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=ffffe3d4eb10 a2=4000 a3=1 items=0 ppid=1 pid=1533 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:36.411000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 12 20:22:36.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.445729 systemd[1]: Started systemd-journald.service.
Feb 12 20:22:36.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.451220 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 12 20:22:36.455730 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 12 20:22:36.456168 systemd[1]: Finished modprobe@loop.service.
Feb 12 20:22:36.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.460406 systemd[1]: Finished systemd-modules-load.service.
Feb 12 20:22:36.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.467407 systemd[1]: Finished systemd-network-generator.service.
Feb 12 20:22:36.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.470841 systemd[1]: Finished systemd-remount-fs.service.
Feb 12 20:22:36.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.473938 systemd[1]: Reached target network-pre.target.
Feb 12 20:22:36.481261 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 12 20:22:36.486173 systemd[1]: Mounting sys-kernel-config.mount...
Feb 12 20:22:36.488120 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 12 20:22:36.491826 systemd[1]: Starting systemd-hwdb-update.service...
Feb 12 20:22:36.504011 systemd[1]: Starting systemd-journal-flush.service...
Feb 12 20:22:36.506414 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 12 20:22:36.512612 systemd[1]: Starting systemd-random-seed.service...
Feb 12 20:22:36.514852 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 12 20:22:36.521302 systemd[1]: Starting systemd-sysctl.service...
Feb 12 20:22:36.526358 systemd[1]: Starting systemd-sysusers.service...
Feb 12 20:22:36.533018 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 12 20:22:36.536645 systemd[1]: Mounted sys-kernel-config.mount.
Feb 12 20:22:36.565340 systemd-journald[1533]: Time spent on flushing to /var/log/journal/ec238894b67dc5d92b811d240def0836 is 96.756ms for 1106 entries.
Feb 12 20:22:36.565340 systemd-journald[1533]: System Journal (/var/log/journal/ec238894b67dc5d92b811d240def0836) is 8.0M, max 195.6M, 187.6M free.
Feb 12 20:22:36.699668 systemd-journald[1533]: Received client request to flush runtime journal.
Feb 12 20:22:36.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.580080 systemd[1]: Finished systemd-random-seed.service.
Feb 12 20:22:36.582533 systemd[1]: Reached target first-boot-complete.target.
Feb 12 20:22:36.707346 udevadm[1586]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 12 20:22:36.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:36.612928 systemd[1]: Finished systemd-sysctl.service.
Feb 12 20:22:36.644272 systemd[1]: Finished systemd-udev-trigger.service.
Feb 12 20:22:36.649476 systemd[1]: Starting systemd-udev-settle.service...
Feb 12 20:22:36.689227 systemd[1]: Finished systemd-sysusers.service.
Feb 12 20:22:36.694195 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 12 20:22:36.707707 systemd[1]: Finished systemd-journal-flush.service.
Feb 12 20:22:36.751628 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 12 20:22:36.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:37.505800 systemd[1]: Finished systemd-hwdb-update.service.
Feb 12 20:22:37.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:37.510529 systemd[1]: Starting systemd-udevd.service...
Feb 12 20:22:37.552797 systemd-udevd[1594]: Using default interface naming scheme 'v252'.
Feb 12 20:22:37.604209 systemd[1]: Started systemd-udevd.service.
Feb 12 20:22:37.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:37.629639 systemd[1]: Starting systemd-networkd.service...
Feb 12 20:22:37.645951 systemd[1]: Starting systemd-userdbd.service...
Feb 12 20:22:37.704797 (udev-worker)[1607]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:22:37.735931 systemd[1]: Found device dev-ttyS0.device.
Feb 12 20:22:37.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:37.760244 systemd[1]: Started systemd-userdbd.service.
Feb 12 20:22:37.926610 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/nvme0n1p6 scanned by (udev-worker) (1601)
Feb 12 20:22:37.974498 systemd-networkd[1611]: lo: Link UP
Feb 12 20:22:37.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:37.975625 systemd-networkd[1611]: lo: Gained carrier
Feb 12 20:22:37.976979 systemd-networkd[1611]: Enumeration completed
Feb 12 20:22:37.977199 systemd[1]: Started systemd-networkd.service.
Feb 12 20:22:37.983807 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 12 20:22:37.988483 systemd-networkd[1611]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 12 20:22:37.994773 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 12 20:22:37.996032 systemd-networkd[1611]: eth0: Link UP
Feb 12 20:22:37.996511 systemd-networkd[1611]: eth0: Gained carrier
Feb 12 20:22:38.020850 systemd-networkd[1611]: eth0: DHCPv4 address 172.31.16.195/20, gateway 172.31.16.1 acquired from 172.31.16.1
Feb 12 20:22:38.145264 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 12 20:22:38.155947 systemd[1]: Finished systemd-udev-settle.service.
Feb 12 20:22:38.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.173123 systemd[1]: Starting lvm2-activation-early.service...
Feb 12 20:22:38.210778 lvm[1714]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:22:38.252405 systemd[1]: Finished lvm2-activation-early.service.
Feb 12 20:22:38.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.257637 systemd[1]: Reached target cryptsetup.target.
Feb 12 20:22:38.263630 systemd[1]: Starting lvm2-activation.service...
Feb 12 20:22:38.271489 lvm[1716]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 12 20:22:38.309690 systemd[1]: Finished lvm2-activation.service.
Feb 12 20:22:38.312000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.314620 systemd[1]: Reached target local-fs-pre.target.
Feb 12 20:22:38.318270 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 12 20:22:38.318526 systemd[1]: Reached target local-fs.target.
Feb 12 20:22:38.320816 systemd[1]: Reached target machines.target.
Feb 12 20:22:38.325639 systemd[1]: Starting ldconfig.service...
Feb 12 20:22:38.340581 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 12 20:22:38.340874 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:22:38.343594 systemd[1]: Starting systemd-boot-update.service...
Feb 12 20:22:38.349281 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 12 20:22:38.355694 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 12 20:22:38.359021 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:22:38.359201 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 12 20:22:38.362830 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 12 20:22:38.377436 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1719 (bootctl)
Feb 12 20:22:38.379896 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 12 20:22:38.409040 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 12 20:22:38.411865 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 12 20:22:38.417200 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 12 20:22:38.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.420378 systemd-tmpfiles[1722]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 12 20:22:38.498942 systemd-fsck[1728]: fsck.fat 4.2 (2021-01-31)
Feb 12 20:22:38.498942 systemd-fsck[1728]: /dev/nvme0n1p1: 236 files, 113719/258078 clusters
Feb 12 20:22:38.502406 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 12 20:22:38.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.510742 systemd[1]: Mounting boot.mount...
Feb 12 20:22:38.552305 systemd[1]: Mounted boot.mount.
Feb 12 20:22:38.586492 systemd[1]: Finished systemd-boot-update.service.
Feb 12 20:22:38.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.792348 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 12 20:22:38.795000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.799962 systemd[1]: Starting audit-rules.service...
Feb 12 20:22:38.806620 systemd[1]: Starting clean-ca-certificates.service...
Feb 12 20:22:38.813591 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 12 20:22:38.826115 systemd[1]: Starting systemd-resolved.service...
Feb 12 20:22:38.836901 systemd[1]: Starting systemd-timesyncd.service...
Feb 12 20:22:38.843866 systemd[1]: Starting systemd-update-utmp.service...
Feb 12 20:22:38.856218 systemd[1]: Finished clean-ca-certificates.service.
Feb 12 20:22:38.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.869115 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 12 20:22:38.885000 audit[1753]: SYSTEM_BOOT pid=1753 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.892486 systemd[1]: Finished systemd-update-utmp.service.
Feb 12 20:22:38.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.908137 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 12 20:22:38.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:38.986000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 12 20:22:38.986000 audit[1769]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffedd85c50 a2=420 a3=0 items=0 ppid=1746 pid=1769 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:38.986000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 12 20:22:38.988408 augenrules[1769]: No rules
Feb 12 20:22:38.989353 systemd[1]: Finished audit-rules.service.
Feb 12 20:22:39.085028 systemd-resolved[1750]: Positive Trust Anchors:
Feb 12 20:22:39.085057 systemd-resolved[1750]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 12 20:22:39.085154 systemd-resolved[1750]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 12 20:22:39.105336 systemd-resolved[1750]: Defaulting to hostname 'linux'.
Feb 12 20:22:39.108917 systemd[1]: Started systemd-resolved.service.
Feb 12 20:22:39.112243 systemd[1]: Started systemd-timesyncd.service.
Feb 12 20:22:39.114954 systemd[1]: Reached target network.target.
Feb 12 20:22:39.128022 systemd[1]: Reached target nss-lookup.target.
Feb 12 20:22:39.130727 systemd[1]: Reached target time-set.target.
Feb 12 20:22:39.164752 systemd-networkd[1611]: eth0: Gained IPv6LL Feb 12 20:22:39.169783 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 12 20:22:39.174421 systemd[1]: Reached target network-online.target. Feb 12 20:22:39.208849 systemd-timesyncd[1752]: Contacted time server 71.162.136.44:123 (0.flatcar.pool.ntp.org). Feb 12 20:22:39.209122 systemd-timesyncd[1752]: Initial clock synchronization to Mon 2024-02-12 20:22:39.187522 UTC. Feb 12 20:22:39.209864 ldconfig[1718]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 12 20:22:39.233946 systemd[1]: Finished ldconfig.service. Feb 12 20:22:39.240679 systemd[1]: Starting systemd-update-done.service... Feb 12 20:22:39.255882 systemd[1]: Finished systemd-update-done.service. Feb 12 20:22:39.260221 systemd[1]: Reached target sysinit.target. Feb 12 20:22:39.264522 systemd[1]: Started motdgen.path. Feb 12 20:22:39.267456 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 12 20:22:39.271440 systemd[1]: Started logrotate.timer. Feb 12 20:22:39.274093 systemd[1]: Started mdadm.timer. Feb 12 20:22:39.276531 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 12 20:22:39.278929 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 12 20:22:39.279131 systemd[1]: Reached target paths.target. Feb 12 20:22:39.281454 systemd[1]: Reached target timers.target. Feb 12 20:22:39.291644 systemd[1]: Listening on dbus.socket. Feb 12 20:22:39.296502 systemd[1]: Starting docker.socket... Feb 12 20:22:39.303471 systemd[1]: Listening on sshd.socket. Feb 12 20:22:39.306106 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 12 20:22:39.307208 systemd[1]: Listening on docker.socket. 
Feb 12 20:22:39.309803 systemd[1]: Reached target sockets.target.
Feb 12 20:22:39.312203 systemd[1]: Reached target basic.target.
Feb 12 20:22:39.314823 systemd[1]: System is tainted: cgroupsv1
Feb 12 20:22:39.315118 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:22:39.315316 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 12 20:22:39.318137 systemd[1]: Started amazon-ssm-agent.service.
Feb 12 20:22:39.323438 systemd[1]: Starting containerd.service...
Feb 12 20:22:39.329064 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Feb 12 20:22:39.341116 systemd[1]: Starting dbus.service...
Feb 12 20:22:39.346235 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 12 20:22:39.377608 systemd[1]: Starting extend-filesystems.service...
Feb 12 20:22:39.379922 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 12 20:22:39.388883 systemd[1]: Starting motdgen.service...
Feb 12 20:22:39.395858 systemd[1]: Started nvidia.service.
Feb 12 20:22:39.401455 systemd[1]: Starting prepare-cni-plugins.service...
Feb 12 20:22:39.408374 systemd[1]: Starting prepare-critools.service...
Feb 12 20:22:39.415298 systemd[1]: Starting prepare-helm.service...
Feb 12 20:22:39.424893 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 12 20:22:39.431218 systemd[1]: Starting sshd-keygen.service...
Feb 12 20:22:39.445877 systemd[1]: Starting systemd-logind.service...
Feb 12 20:22:39.450235 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 12 20:22:39.450383 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 12 20:22:39.453301 systemd[1]: Starting update-engine.service...
Feb 12 20:22:39.476499 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 12 20:22:39.496607 jq[1803]: true
Feb 12 20:22:39.510181 jq[1787]: false
Feb 12 20:22:39.539811 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 12 20:22:39.540360 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 12 20:22:39.581208 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 12 20:22:39.581811 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 12 20:22:39.622324 tar[1819]: ./
Feb 12 20:22:39.622324 tar[1819]: ./macvlan
Feb 12 20:22:39.662138 tar[1807]: crictl
Feb 12 20:22:39.665597 tar[1809]: linux-arm64/helm
Feb 12 20:22:39.667275 jq[1820]: true
Feb 12 20:22:39.723284 extend-filesystems[1789]: Found nvme0n1
Feb 12 20:22:39.723446 systemd[1]: motdgen.service: Deactivated successfully.
Feb 12 20:22:39.739872 dbus-daemon[1786]: [system] SELinux support is enabled
Feb 12 20:22:39.724048 systemd[1]: Finished motdgen.service.
Feb 12 20:22:39.740204 systemd[1]: Started dbus.service.
Feb 12 20:22:39.743643 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 12 20:22:39.743689 systemd[1]: Reached target system-config.target.
Feb 12 20:22:39.743893 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 12 20:22:39.743935 systemd[1]: Reached target user-config.target.
Feb 12 20:22:39.798582 dbus-daemon[1786]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1611 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 12 20:22:39.799036 extend-filesystems[1789]: Found nvme0n1p9
Feb 12 20:22:39.808386 extend-filesystems[1789]: Checking size of /dev/nvme0n1p9
Feb 12 20:22:39.804431 systemd[1]: Starting systemd-hostnamed.service...
Feb 12 20:22:39.848714 systemd[1]: Created slice system-sshd.slice.
Feb 12 20:22:39.874325 extend-filesystems[1789]: Resized partition /dev/nvme0n1p9
Feb 12 20:22:39.892775 extend-filesystems[1859]: resize2fs 1.46.5 (30-Dec-2021)
Feb 12 20:22:40.004587 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks
Feb 12 20:22:40.024999 update_engine[1802]: I0212 20:22:40.023826 1802 main.cc:92] Flatcar Update Engine starting
Feb 12 20:22:40.048759 systemd[1]: Started update-engine.service.
Feb 12 20:22:40.057205 systemd[1]: Started locksmithd.service.
Feb 12 20:22:40.105363 update_engine[1802]: I0212 20:22:40.049140 1802 update_check_scheduler.cc:74] Next update check in 7m39s
Feb 12 20:22:40.114594 amazon-ssm-agent[1782]: 2024/02/12 20:22:40 Failed to load instance info from vault. RegistrationKey does not exist.
Feb 12 20:22:40.135172 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915
Feb 12 20:22:40.122934 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 12 20:22:40.135420 bash[1863]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:22:40.147014 extend-filesystems[1859]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required
Feb 12 20:22:40.147014 extend-filesystems[1859]: old_desc_blocks = 1, new_desc_blocks = 1
Feb 12 20:22:40.147014 extend-filesystems[1859]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long.
Feb 12 20:22:40.165591 extend-filesystems[1789]: Resized filesystem in /dev/nvme0n1p9
Feb 12 20:22:40.171078 amazon-ssm-agent[1782]: Initializing new seelog logger
Feb 12 20:22:40.171078 amazon-ssm-agent[1782]: New Seelog Logger Creation Complete
Feb 12 20:22:40.171078 amazon-ssm-agent[1782]: 2024/02/12 20:22:40 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 20:22:40.171078 amazon-ssm-agent[1782]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json.
Feb 12 20:22:40.171078 amazon-ssm-agent[1782]: 2024/02/12 20:22:40 processing appconfig overrides
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p1
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p2
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p3
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found usr
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p4
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p6
Feb 12 20:22:40.178482 extend-filesystems[1789]: Found nvme0n1p7
Feb 12 20:22:40.173295 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 12 20:22:40.226891 tar[1819]: ./static
Feb 12 20:22:40.259243 env[1824]: time="2024-02-12T20:22:40.259147466Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 12 20:22:40.324261 systemd[1]: Finished extend-filesystems.service.
Feb 12 20:22:40.331053 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 12 20:22:40.332718 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 12 20:22:40.379657 systemd[1]: nvidia.service: Deactivated successfully.
Feb 12 20:22:40.402472 systemd-logind[1800]: Watching system buttons on /dev/input/event0 (Power Button)
Feb 12 20:22:40.408736 systemd-logind[1800]: New seat seat0.
Feb 12 20:22:40.415089 systemd[1]: Started systemd-logind.service.
Feb 12 20:22:40.503020 env[1824]: time="2024-02-12T20:22:40.502935847Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 12 20:22:40.503262 env[1824]: time="2024-02-12T20:22:40.503203381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.510244 env[1824]: time="2024-02-12T20:22:40.510156224Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:22:40.510244 env[1824]: time="2024-02-12T20:22:40.510233594Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.521031 env[1824]: time="2024-02-12T20:22:40.520951221Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:22:40.521031 env[1824]: time="2024-02-12T20:22:40.521020561Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.521276 env[1824]: time="2024-02-12T20:22:40.521059833Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 12 20:22:40.521276 env[1824]: time="2024-02-12T20:22:40.521085468Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.521391 env[1824]: time="2024-02-12T20:22:40.521322190Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.522736 tar[1819]: ./vlan
Feb 12 20:22:40.527787 env[1824]: time="2024-02-12T20:22:40.527718777Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 12 20:22:40.528231 env[1824]: time="2024-02-12T20:22:40.528150830Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 12 20:22:40.528231 env[1824]: time="2024-02-12T20:22:40.528218516Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 12 20:22:40.528433 env[1824]: time="2024-02-12T20:22:40.528381680Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 12 20:22:40.528433 env[1824]: time="2024-02-12T20:22:40.528415032Z" level=info msg="metadata content store policy set" policy=shared
Feb 12 20:22:40.545833 env[1824]: time="2024-02-12T20:22:40.545764399Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 12 20:22:40.546014 env[1824]: time="2024-02-12T20:22:40.545840127Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 12 20:22:40.546014 env[1824]: time="2024-02-12T20:22:40.545873563Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 12 20:22:40.546014 env[1824]: time="2024-02-12T20:22:40.545951736Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546014 env[1824]: time="2024-02-12T20:22:40.545990397Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546247 env[1824]: time="2024-02-12T20:22:40.546024264Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546247 env[1824]: time="2024-02-12T20:22:40.546056561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546761 env[1824]: time="2024-02-12T20:22:40.546696431Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546908 env[1824]: time="2024-02-12T20:22:40.546767125Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546908 env[1824]: time="2024-02-12T20:22:40.546802251Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546908 env[1824]: time="2024-02-12T20:22:40.546837053Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.546908 env[1824]: time="2024-02-12T20:22:40.546868763Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 12 20:22:40.547126 env[1824]: time="2024-02-12T20:22:40.547089115Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 12 20:22:40.547298 env[1824]: time="2024-02-12T20:22:40.547255299Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 12 20:22:40.549822 env[1824]: time="2024-02-12T20:22:40.549747319Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 12 20:22:40.549991 env[1824]: time="2024-02-12T20:22:40.549837008Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.549991 env[1824]: time="2024-02-12T20:22:40.549877263Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 12 20:22:40.549991 env[1824]: time="2024-02-12T20:22:40.549981501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550165 env[1824]: time="2024-02-12T20:22:40.550014206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550165 env[1824]: time="2024-02-12T20:22:40.550047186Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550165 env[1824]: time="2024-02-12T20:22:40.550076332Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550165 env[1824]: time="2024-02-12T20:22:40.550107814Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550165 env[1824]: time="2024-02-12T20:22:40.550138482Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550436 env[1824]: time="2024-02-12T20:22:40.550168238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550436 env[1824]: time="2024-02-12T20:22:40.550198894Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550436 env[1824]: time="2024-02-12T20:22:40.550235182Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 12 20:22:40.550663 env[1824]: time="2024-02-12T20:22:40.550571313Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550663 env[1824]: time="2024-02-12T20:22:40.550611184Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550781 env[1824]: time="2024-02-12T20:22:40.550657000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.550781 env[1824]: time="2024-02-12T20:22:40.550689824Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 12 20:22:40.550781 env[1824]: time="2024-02-12T20:22:40.550722158Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 12 20:22:40.550781 env[1824]: time="2024-02-12T20:22:40.550749673Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 12 20:22:40.551009 env[1824]: time="2024-02-12T20:22:40.550784463Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 12 20:22:40.551009 env[1824]: time="2024-02-12T20:22:40.550848435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 12 20:22:40.562978 env[1824]: time="2024-02-12T20:22:40.562781914Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 12 20:22:40.564096 env[1824]: time="2024-02-12T20:22:40.562942873Z" level=info msg="Connect containerd service"
Feb 12 20:22:40.564096 env[1824]: time="2024-02-12T20:22:40.563958673Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 12 20:22:40.565009 env[1824]: time="2024-02-12T20:22:40.564938832Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 12 20:22:40.565472 env[1824]: time="2024-02-12T20:22:40.565416557Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 12 20:22:40.565586 env[1824]: time="2024-02-12T20:22:40.565535355Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 12 20:22:40.565989 env[1824]: time="2024-02-12T20:22:40.565657126Z" level=info msg="containerd successfully booted in 0.321311s"
Feb 12 20:22:40.565814 systemd[1]: Started containerd.service.
Feb 12 20:22:40.581603 env[1824]: time="2024-02-12T20:22:40.581410756Z" level=info msg="Start subscribing containerd event"
Feb 12 20:22:40.581603 env[1824]: time="2024-02-12T20:22:40.581530933Z" level=info msg="Start recovering state"
Feb 12 20:22:40.581806 env[1824]: time="2024-02-12T20:22:40.581678099Z" level=info msg="Start event monitor"
Feb 12 20:22:40.581806 env[1824]: time="2024-02-12T20:22:40.581720870Z" level=info msg="Start snapshots syncer"
Feb 12 20:22:40.581806 env[1824]: time="2024-02-12T20:22:40.581746025Z" level=info msg="Start cni network conf syncer for default"
Feb 12 20:22:40.581806 env[1824]: time="2024-02-12T20:22:40.581768112Z" level=info msg="Start streaming server"
Feb 12 20:22:40.687977 dbus-daemon[1786]: [system] Successfully activated service 'org.freedesktop.hostname1'
Feb 12 20:22:40.688206 systemd[1]: Started systemd-hostnamed.service.
Feb 12 20:22:40.693472 dbus-daemon[1786]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1845 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Feb 12 20:22:40.698424 systemd[1]: Starting polkit.service...
Feb 12 20:22:40.757525 polkitd[1928]: Started polkitd version 121
Feb 12 20:22:40.788506 polkitd[1928]: Loading rules from directory /etc/polkit-1/rules.d
Feb 12 20:22:40.788866 polkitd[1928]: Loading rules from directory /usr/share/polkit-1/rules.d
Feb 12 20:22:40.796827 polkitd[1928]: Finished loading, compiling and executing 2 rules
Feb 12 20:22:40.798848 dbus-daemon[1786]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Feb 12 20:22:40.799105 systemd[1]: Started polkit.service.
Feb 12 20:22:40.800644 polkitd[1928]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Feb 12 20:22:40.830127 tar[1819]: ./portmap
Feb 12 20:22:40.867987 systemd-hostnamed[1845]: Hostname set to (transient)
Feb 12 20:22:40.868173 systemd-resolved[1750]: System hostname changed to 'ip-172-31-16-195'.
Feb 12 20:22:40.905586 coreos-metadata[1784]: Feb 12 20:22:40.905 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1
Feb 12 20:22:40.908172 coreos-metadata[1784]: Feb 12 20:22:40.908 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys: Attempt #1
Feb 12 20:22:40.909865 coreos-metadata[1784]: Feb 12 20:22:40.909 INFO Fetch successful
Feb 12 20:22:40.909865 coreos-metadata[1784]: Feb 12 20:22:40.909 INFO Fetching http://169.254.169.254/2019-10-01/meta-data/public-keys/0/openssh-key: Attempt #1
Feb 12 20:22:40.911136 coreos-metadata[1784]: Feb 12 20:22:40.911 INFO Fetch successful
Feb 12 20:22:40.914989 unknown[1784]: wrote ssh authorized keys file for user: core
Feb 12 20:22:40.945314 update-ssh-keys[1950]: Updated "/home/core/.ssh/authorized_keys"
Feb 12 20:22:40.946641 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Feb 12 20:22:41.083756 tar[1819]: ./host-local
Feb 12 20:22:41.158099 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Create new startup processor
Feb 12 20:22:41.158705 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [LongRunningPluginsManager] registered plugins: {}
Feb 12 20:22:41.158705 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing bookkeeping folders
Feb 12 20:22:41.158705 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO removing the completed state files
Feb 12 20:22:41.158705 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing bookkeeping folders for long running plugins
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing replies folder for MDS reply requests that couldn't reach the service
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing healthcheck folders for long running plugins
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing locations for inventory plugin
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing default location for custom inventory
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing default location for file inventory
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Initializing default location for role inventory
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Init the cloudwatchlogs publisher
Feb 12 20:22:41.158940 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:runDockerAction
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:configurePackage
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:runDocument
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:softwareInventory
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:runPowerShellScript
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:updateSsmAgent
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:configureDocker
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:refreshAssociation
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform independent plugin aws:downloadContent
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Successfully loaded platform dependent plugin aws:runShellScript
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO Starting Agent: amazon-ssm-agent - v2.3.1319.0
Feb 12 20:22:41.159337 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO OS: linux, Arch: arm64
Feb 12 20:22:41.163589 amazon-ssm-agent[1782]: datastore file /var/lib/amazon/ssm/i-0615a445102598199/longrunningplugins/datastore/store doesn't exist - no long running plugins to execute
Feb 12 20:22:41.195976 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] Starting document processing engine...
Feb 12 20:22:41.285761 tar[1819]: ./vrf
Feb 12 20:22:41.292105 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [EngineProcessor] Starting
Feb 12 20:22:41.387249 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [EngineProcessor] Initial processing
Feb 12 20:22:41.481899 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] Starting message polling
Feb 12 20:22:41.488475 tar[1819]: ./bridge
Feb 12 20:22:41.576557 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] Starting send replies to MDS
Feb 12 20:22:41.636205 tar[1819]: ./tuning
Feb 12 20:22:41.671572 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [instanceID=i-0615a445102598199] Starting association polling
Feb 12 20:22:41.743366 tar[1819]: ./firewall
Feb 12 20:22:41.766673 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Starting
Feb 12 20:22:41.862044 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [Association] Launching response handler
Feb 12 20:22:41.877317 tar[1819]: ./host-device
Feb 12 20:22:41.957535 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [Association] [EngineProcessor] Initial processing
Feb 12 20:22:41.993555 tar[1819]: ./sbr
Feb 12 20:22:42.053309 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [Association] Initializing association scheduling service
Feb 12 20:22:42.100626 tar[1819]: ./loopback
Feb 12 20:22:42.136277 systemd[1]: Finished prepare-critools.service.
Feb 12 20:22:42.149229 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessagingDeliveryService] [Association] Association scheduling service initialized
Feb 12 20:22:42.167245 tar[1809]: linux-arm64/LICENSE
Feb 12 20:22:42.168043 tar[1809]: linux-arm64/README.md
Feb 12 20:22:42.184476 tar[1819]: ./dhcp
Feb 12 20:22:42.185367 systemd[1]: Finished prepare-helm.service.
Feb 12 20:22:42.245447 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [OfflineService] Starting document processing engine...
Feb 12 20:22:42.333402 tar[1819]: ./ptp
Feb 12 20:22:42.341693 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [OfflineService] [EngineProcessor] Starting
Feb 12 20:22:42.395501 tar[1819]: ./ipvlan
Feb 12 20:22:42.438428 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [OfflineService] [EngineProcessor] Initial processing
Feb 12 20:22:42.457677 tar[1819]: ./bandwidth
Feb 12 20:22:42.539782 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [OfflineService] Starting message polling
Feb 12 20:22:42.546638 systemd[1]: Finished prepare-cni-plugins.service.
Feb 12 20:22:42.617752 locksmithd[1874]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 12 20:22:42.636707 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [OfflineService] Starting send replies to MDS
Feb 12 20:22:42.733846 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [LongRunningPluginsManager] starting long running plugin manager
Feb 12 20:22:42.831146 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [LongRunningPluginsManager] there aren't any long running plugin to execute
Feb 12 20:22:42.928636 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [HealthCheck] HealthCheck reporting agent health.
Feb 12 20:22:43.026260 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] Starting session document processing engine...
Feb 12 20:22:43.124257 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] [EngineProcessor] Starting
Feb 12 20:22:43.222233 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] SSM Agent is trying to setup control channel for Session Manager module.
Feb 12 20:22:43.320578 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] Setting up websocket for controlchannel for instance: i-0615a445102598199, requestId: acc7aa56-ffb0-449f-b1ab-7d0d1bb4e2bf
Feb 12 20:22:43.419079 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] listening reply.
Feb 12 20:22:43.517722 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [LongRunningPluginsManager] There are no long running plugins currently getting executed - skipping their healthcheck
Feb 12 20:22:43.616653 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [StartupProcessor] Executing startup processor tasks
Feb 12 20:22:43.715749 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [StartupProcessor] Write to serial port: Amazon SSM Agent v2.3.1319.0 is running
Feb 12 20:22:43.814949 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [StartupProcessor] Write to serial port: OsProductName: Flatcar Container Linux by Kinvolk
Feb 12 20:22:43.914364 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [StartupProcessor] Write to serial port: OsVersion: 3510.3.2
Feb 12 20:22:44.014133 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] Opening websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0615a445102598199?role=subscribe&stream=input
Feb 12 20:22:44.113989 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] Successfully opened websocket connection to: wss://ssmmessages.us-west-2.amazonaws.com/v1/control-channel/i-0615a445102598199?role=subscribe&stream=input
Feb 12 20:22:44.214012 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] Starting receiving message from control channel
Feb 12 20:22:44.314335 amazon-ssm-agent[1782]: 2024-02-12 20:22:41 INFO [MessageGatewayService] [EngineProcessor] Initial processing
Feb 12 20:22:45.026132 sshd_keygen[1839]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 12 20:22:45.066268 systemd[1]: Finished sshd-keygen.service.
Feb 12 20:22:45.073930 systemd[1]: Starting issuegen.service...
Feb 12 20:22:45.081148 systemd[1]: Started sshd@0-172.31.16.195:22-147.75.109.163:46906.service.
Feb 12 20:22:45.095295 systemd[1]: issuegen.service: Deactivated successfully.
Feb 12 20:22:45.095866 systemd[1]: Finished issuegen.service.
Feb 12 20:22:45.101906 systemd[1]: Starting systemd-user-sessions.service...
Feb 12 20:22:45.119840 systemd[1]: Finished systemd-user-sessions.service.
Feb 12 20:22:45.126917 systemd[1]: Started getty@tty1.service.
Feb 12 20:22:45.134872 systemd[1]: Started serial-getty@ttyS0.service.
Feb 12 20:22:45.138178 systemd[1]: Reached target getty.target.
Feb 12 20:22:45.141791 systemd[1]: Reached target multi-user.target.
Feb 12 20:22:45.147624 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 12 20:22:45.164436 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 12 20:22:45.165036 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 12 20:22:45.172937 systemd[1]: Startup finished in 13.015s (kernel) + 13.695s (userspace) = 26.711s.
Feb 12 20:22:45.305346 sshd[2022]: Accepted publickey for core from 147.75.109.163 port 46906 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:45.309324 sshd[2022]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:45.327408 systemd[1]: Created slice user-500.slice.
Feb 12 20:22:45.329663 systemd[1]: Starting user-runtime-dir@500.service...
Feb 12 20:22:45.334794 systemd-logind[1800]: New session 1 of user core.
Feb 12 20:22:45.351982 systemd[1]: Finished user-runtime-dir@500.service.
Feb 12 20:22:45.356005 systemd[1]: Starting user@500.service...
Feb 12 20:22:45.363062 (systemd)[2036]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:45.549832 systemd[2036]: Queued start job for default target default.target.
Feb 12 20:22:45.550846 systemd[2036]: Reached target paths.target.
Feb 12 20:22:45.551082 systemd[2036]: Reached target sockets.target.
Feb 12 20:22:45.551270 systemd[2036]: Reached target timers.target.
Feb 12 20:22:45.551438 systemd[2036]: Reached target basic.target.
Feb 12 20:22:45.551745 systemd[2036]: Reached target default.target.
Feb 12 20:22:45.551876 systemd[1]: Started user@500.service.
Feb 12 20:22:45.552076 systemd[2036]: Startup finished in 177ms.
Feb 12 20:22:45.553949 systemd[1]: Started session-1.scope.
Feb 12 20:22:45.704600 systemd[1]: Started sshd@1-172.31.16.195:22-147.75.109.163:41298.service.
Feb 12 20:22:45.883381 sshd[2045]: Accepted publickey for core from 147.75.109.163 port 41298 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:45.886746 sshd[2045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:45.896673 systemd-logind[1800]: New session 2 of user core.
Feb 12 20:22:45.898358 systemd[1]: Started session-2.scope.
Feb 12 20:22:46.037906 sshd[2045]: pam_unix(sshd:session): session closed for user core
Feb 12 20:22:46.044150 systemd-logind[1800]: Session 2 logged out. Waiting for processes to exit.
Feb 12 20:22:46.045999 systemd[1]: sshd@1-172.31.16.195:22-147.75.109.163:41298.service: Deactivated successfully.
Feb 12 20:22:46.048520 systemd[1]: session-2.scope: Deactivated successfully.
Feb 12 20:22:46.050087 systemd-logind[1800]: Removed session 2.
Feb 12 20:22:46.063474 systemd[1]: Started sshd@2-172.31.16.195:22-147.75.109.163:41308.service.
Feb 12 20:22:46.243073 sshd[2052]: Accepted publickey for core from 147.75.109.163 port 41308 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:46.246467 sshd[2052]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:46.256866 systemd[1]: Started session-3.scope.
Feb 12 20:22:46.258683 systemd-logind[1800]: New session 3 of user core.
Feb 12 20:22:46.388397 sshd[2052]: pam_unix(sshd:session): session closed for user core
Feb 12 20:22:46.395228 systemd-logind[1800]: Session 3 logged out. Waiting for processes to exit.
Feb 12 20:22:46.395828 systemd[1]: sshd@2-172.31.16.195:22-147.75.109.163:41308.service: Deactivated successfully.
Feb 12 20:22:46.398122 systemd[1]: session-3.scope: Deactivated successfully.
Feb 12 20:22:46.399284 systemd-logind[1800]: Removed session 3.
Feb 12 20:22:46.415310 systemd[1]: Started sshd@3-172.31.16.195:22-147.75.109.163:41312.service.
Feb 12 20:22:46.590385 sshd[2059]: Accepted publickey for core from 147.75.109.163 port 41312 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:46.591441 sshd[2059]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:46.600349 systemd[1]: Started session-4.scope.
Feb 12 20:22:46.601917 systemd-logind[1800]: New session 4 of user core.
Feb 12 20:22:46.735142 sshd[2059]: pam_unix(sshd:session): session closed for user core
Feb 12 20:22:46.741011 systemd-logind[1800]: Session 4 logged out. Waiting for processes to exit.
Feb 12 20:22:46.742870 systemd[1]: sshd@3-172.31.16.195:22-147.75.109.163:41312.service: Deactivated successfully.
Feb 12 20:22:46.744474 systemd[1]: session-4.scope: Deactivated successfully.
Feb 12 20:22:46.746970 systemd-logind[1800]: Removed session 4.
Feb 12 20:22:46.761325 systemd[1]: Started sshd@4-172.31.16.195:22-147.75.109.163:41318.service.
Feb 12 20:22:46.934318 sshd[2066]: Accepted publickey for core from 147.75.109.163 port 41318 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:46.937430 sshd[2066]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:46.946317 systemd[1]: Started session-5.scope.
Feb 12 20:22:46.947304 systemd-logind[1800]: New session 5 of user core.
Feb 12 20:22:47.069321 sudo[2070]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 12 20:22:47.070450 sudo[2070]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:22:47.083837 dbus-daemon[1786]: avc: received setenforce notice (enforcing=1)
Feb 12 20:22:47.087352 sudo[2070]: pam_unix(sudo:session): session closed for user root
Feb 12 20:22:47.112373 sshd[2066]: pam_unix(sshd:session): session closed for user core
Feb 12 20:22:47.118245 systemd-logind[1800]: Session 5 logged out. Waiting for processes to exit.
Feb 12 20:22:47.118734 systemd[1]: sshd@4-172.31.16.195:22-147.75.109.163:41318.service: Deactivated successfully.
Feb 12 20:22:47.120400 systemd[1]: session-5.scope: Deactivated successfully.
Feb 12 20:22:47.122160 systemd-logind[1800]: Removed session 5.
Feb 12 20:22:47.139733 systemd[1]: Started sshd@5-172.31.16.195:22-147.75.109.163:41320.service.
Feb 12 20:22:47.314755 sshd[2074]: Accepted publickey for core from 147.75.109.163 port 41320 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:47.317839 sshd[2074]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:47.326221 systemd[1]: Started session-6.scope.
Feb 12 20:22:47.329473 systemd-logind[1800]: New session 6 of user core.
Feb 12 20:22:47.438748 sudo[2079]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 12 20:22:47.439818 sudo[2079]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:22:47.446326 sudo[2079]: pam_unix(sudo:session): session closed for user root
Feb 12 20:22:47.456899 sudo[2078]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 12 20:22:47.457465 sudo[2078]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:22:47.478821 systemd[1]: Stopping audit-rules.service...
Feb 12 20:22:47.479000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 20:22:47.482674 kernel: kauditd_printk_skb: 37 callbacks suppressed
Feb 12 20:22:47.482803 kernel: audit: type=1305 audit(1707769367.479:129): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1
Feb 12 20:22:47.479000 audit[2082]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffefd8cf50 a2=420 a3=0 items=0 ppid=1 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:47.499314 kernel: audit: type=1300 audit(1707769367.479:129): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffefd8cf50 a2=420 a3=0 items=0 ppid=1 pid=2082 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:47.499442 auditctl[2082]: No rules
Feb 12 20:22:47.479000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44
Feb 12 20:22:47.503575 kernel: audit: type=1327 audit(1707769367.479:129): proctitle=2F7362696E2F617564697463746C002D44
Feb 12 20:22:47.500190 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 12 20:22:47.500782 systemd[1]: Stopped audit-rules.service.
Feb 12 20:22:47.504256 systemd[1]: Starting audit-rules.service...
Feb 12 20:22:47.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.515957 kernel: audit: type=1131 audit(1707769367.498:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.546108 augenrules[2100]: No rules
Feb 12 20:22:47.548199 systemd[1]: Finished audit-rules.service.
Feb 12 20:22:47.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.550737 sudo[2078]: pam_unix(sudo:session): session closed for user root
Feb 12 20:22:47.547000 audit[2078]: USER_END pid=2078 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.568611 kernel: audit: type=1130 audit(1707769367.547:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.568772 kernel: audit: type=1106 audit(1707769367.547:132): pid=2078 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.568826 kernel: audit: type=1104 audit(1707769367.547:133): pid=2078 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.547000 audit[2078]: CRED_DISP pid=2078 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.580871 sshd[2074]: pam_unix(sshd:session): session closed for user core
Feb 12 20:22:47.582000 audit[2074]: USER_END pid=2074 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.588041 systemd[1]: sshd@5-172.31.16.195:22-147.75.109.163:41320.service: Deactivated successfully.
Feb 12 20:22:47.589833 systemd[1]: session-6.scope: Deactivated successfully.
Feb 12 20:22:47.583000 audit[2074]: CRED_DISP pid=2074 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.607336 kernel: audit: type=1106 audit(1707769367.582:134): pid=2074 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.607471 kernel: audit: type=1104 audit(1707769367.583:135): pid=2074 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.607604 systemd-logind[1800]: Session 6 logged out. Waiting for processes to exit.
Feb 12 20:22:47.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.195:22-147.75.109.163:41320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.616103 systemd[1]: Started sshd@6-172.31.16.195:22-147.75.109.163:41332.service.
Feb 12 20:22:47.618112 kernel: audit: type=1131 audit(1707769367.583:136): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.16.195:22-147.75.109.163:41320 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.195:22-147.75.109.163:41332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.619471 systemd-logind[1800]: Removed session 6.
Feb 12 20:22:47.792000 audit[2107]: USER_ACCT pid=2107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.793588 sshd[2107]: Accepted publickey for core from 147.75.109.163 port 41332 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc
Feb 12 20:22:47.794000 audit[2107]: CRED_ACQ pid=2107 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.795000 audit[2107]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd4fb8650 a2=3 a3=1 items=0 ppid=1 pid=2107 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:47.795000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Feb 12 20:22:47.796804 sshd[2107]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 12 20:22:47.806842 systemd[1]: Started session-7.scope.
Feb 12 20:22:47.809627 systemd-logind[1800]: New session 7 of user core.
Feb 12 20:22:47.821000 audit[2107]: USER_START pid=2107 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.824000 audit[2110]: CRED_ACQ pid=2110 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success'
Feb 12 20:22:47.921000 audit[2111]: USER_ACCT pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.922191 sudo[2111]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 12 20:22:47.921000 audit[2111]: CRED_REFR pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:47.922794 sudo[2111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 12 20:22:47.925000 audit[2111]: USER_START pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success'
Feb 12 20:22:48.641685 systemd[1]: Starting docker.service...
Feb 12 20:22:48.715243 env[2126]: time="2024-02-12T20:22:48.715154932Z" level=info msg="Starting up"
Feb 12 20:22:48.718916 env[2126]: time="2024-02-12T20:22:48.718847335Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 20:22:48.718916 env[2126]: time="2024-02-12T20:22:48.718896496Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 20:22:48.719163 env[2126]: time="2024-02-12T20:22:48.718941784Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 20:22:48.719163 env[2126]: time="2024-02-12T20:22:48.718968151Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 20:22:48.723256 env[2126]: time="2024-02-12T20:22:48.723199074Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Feb 12 20:22:48.723498 env[2126]: time="2024-02-12T20:22:48.723461666Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Feb 12 20:22:48.723719 env[2126]: time="2024-02-12T20:22:48.723678288Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc
Feb 12 20:22:48.723859 env[2126]: time="2024-02-12T20:22:48.723826575Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Feb 12 20:22:48.738752 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1596608050-merged.mount: Deactivated successfully.
Feb 12 20:22:49.234561 env[2126]: time="2024-02-12T20:22:49.234481788Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Feb 12 20:22:49.234879 env[2126]: time="2024-02-12T20:22:49.234566949Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Feb 12 20:22:49.235006 env[2126]: time="2024-02-12T20:22:49.234965247Z" level=info msg="Loading containers: start."
Feb 12 20:22:49.322000 audit[2157]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.322000 audit[2157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdaaf32d0 a2=0 a3=1 items=0 ppid=2126 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552
Feb 12 20:22:49.327000 audit[2159]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.327000 audit[2159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc622d090 a2=0 a3=1 items=0 ppid=2126 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.327000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552
Feb 12 20:22:49.332000 audit[2161]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.332000 audit[2161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc542e6e0 a2=0 a3=1 items=0 ppid=2126 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.332000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 12 20:22:49.338000 audit[2163]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2163 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.338000 audit[2163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffcbf53bc0 a2=0 a3=1 items=0 ppid=2126 pid=2163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.338000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 12 20:22:49.344000 audit[2165]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2165 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.344000 audit[2165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffef6f8500 a2=0 a3=1 items=0 ppid=2126 pid=2165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.344000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E
Feb 12 20:22:49.370000 audit[2170]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2170 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.370000 audit[2170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdb8abbe0 a2=0 a3=1 items=0 ppid=2126 pid=2170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.370000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E
Feb 12 20:22:49.384000 audit[2172]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2172 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.384000 audit[2172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffd8d9c90 a2=0 a3=1 items=0 ppid=2126 pid=2172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.384000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552
Feb 12 20:22:49.389000 audit[2174]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2174 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.389000 audit[2174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc4c14160 a2=0 a3=1 items=0 ppid=2126 pid=2174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.389000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E
Feb 12 20:22:49.393000 audit[2176]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2176 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.393000 audit[2176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffede07fa0 a2=0 a3=1 items=0 ppid=2126 pid=2176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.393000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 20:22:49.412000 audit[2180]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=2180 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.412000 audit[2180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcdfdd410 a2=0 a3=1 items=0 ppid=2126 pid=2180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 12 20:22:49.415000 audit[2181]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2181 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.415000 audit[2181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff937a6e0 a2=0 a3=1 items=0 ppid=2126 pid=2181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.415000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 20:22:49.428756 kernel: Initializing XFRM netlink socket
Feb 12 20:22:49.476178 env[2126]: time="2024-02-12T20:22:49.476070411Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Feb 12 20:22:49.478201 (udev-worker)[2137]: Network interface NamePolicy= disabled on kernel command line.
Feb 12 20:22:49.518000 audit[2189]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2189 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.518000 audit[2189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=ffffe34ab2f0 a2=0 a3=1 items=0 ppid=2126 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.518000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445
Feb 12 20:22:49.542000 audit[2192]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2192 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.542000 audit[2192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffd006f850 a2=0 a3=1 items=0 ppid=2126 pid=2192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.542000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E
Feb 12 20:22:49.549000 audit[2195]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2195 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.549000 audit[2195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc517e7d0 a2=0 a3=1 items=0 ppid=2126 pid=2195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.549000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054
Feb 12 20:22:49.554000 audit[2197]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2197 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.554000 audit[2197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffffc1aab50 a2=0 a3=1 items=0 ppid=2126 pid=2197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.554000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054
Feb 12 20:22:49.559000 audit[2199]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2199 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.559000 audit[2199]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffc6ca6b40 a2=0 a3=1 items=0 ppid=2126 pid=2199 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.559000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552
Feb 12 20:22:49.564000 audit[2201]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2201 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.564000 audit[2201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffcf57f4a0 a2=0 a3=1 items=0 ppid=2126 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.564000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38
Feb 12 20:22:49.568000 audit[2203]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2203 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.568000 audit[2203]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe4b74160 a2=0 a3=1 items=0 ppid=2126 pid=2203 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.568000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552
Feb 12 20:22:49.584000 audit[2206]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2206 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.584000 audit[2206]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffef9b8890 a2=0 a3=1 items=0 ppid=2126 pid=2206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.584000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054
Feb 12 20:22:49.590000 audit[2208]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2208 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.590000 audit[2208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=fffff61018b0 a2=0 a3=1 items=0 ppid=2126 pid=2208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.590000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31
Feb 12 20:22:49.596000 audit[2210]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2210 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.596000 audit[2210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=fffff8e3cc30 a2=0 a3=1 items=0 ppid=2126 pid=2210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.596000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32
Feb 12 20:22:49.601000 audit[2212]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2212 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.601000 audit[2212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffde38c80 a2=0 a3=1 items=0 ppid=2126 pid=2212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.601000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50
Feb 12 20:22:49.603742 systemd-networkd[1611]: docker0: Link UP
Feb 12 20:22:49.619000 audit[2216]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2216 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.619000 audit[2216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe8504330 a2=0 a3=1 items=0 ppid=2126 pid=2216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.619000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552
Feb 12 20:22:49.623000 audit[2217]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2217 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Feb 12 20:22:49.623000 audit[2217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd30ce7f0 a2=0 a3=1 items=0 ppid=2126 pid=2217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:22:49.623000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552
Feb 12 20:22:49.626372 env[2126]: time="2024-02-12T20:22:49.626308455Z" level=info msg="Loading containers: done."
Feb 12 20:22:49.655762 amazon-ssm-agent[1782]: 2024-02-12 20:22:49 INFO [MessagingDeliveryService] [Association] No associations on boot. Requerying for associations after 30 seconds.
Feb 12 20:22:49.657348 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1499143368-merged.mount: Deactivated successfully.
Feb 12 20:22:49.663494 env[2126]: time="2024-02-12T20:22:49.663409185Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 12 20:22:49.664257 env[2126]: time="2024-02-12T20:22:49.664206104Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 12 20:22:49.664842 env[2126]: time="2024-02-12T20:22:49.664782110Z" level=info msg="Daemon has completed initialization" Feb 12 20:22:49.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:49.695820 systemd[1]: Started docker.service. Feb 12 20:22:49.710589 env[2126]: time="2024-02-12T20:22:49.710447638Z" level=info msg="API listen on /run/docker.sock" Feb 12 20:22:49.748017 systemd[1]: Reloading. Feb 12 20:22:49.895861 /usr/lib/systemd/system-generators/torcx-generator[2263]: time="2024-02-12T20:22:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:22:49.897237 /usr/lib/systemd/system-generators/torcx-generator[2263]: time="2024-02-12T20:22:49Z" level=info msg="torcx already run" Feb 12 20:22:50.088351 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:22:50.088402 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 12 20:22:50.131429 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:22:50.334750 systemd[1]: Started kubelet.service. Feb 12 20:22:50.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:22:50.516561 kubelet[2324]: E0212 20:22:50.516419 2324 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:22:50.521149 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:22:50.521647 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:22:50.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:22:50.884942 env[1824]: time="2024-02-12T20:22:50.884850769Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 12 20:22:51.524813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202504560.mount: Deactivated successfully. 
Feb 12 20:22:54.068710 env[1824]: time="2024-02-12T20:22:54.068629112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:54.072921 env[1824]: time="2024-02-12T20:22:54.072860746Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:54.076185 env[1824]: time="2024-02-12T20:22:54.076125146Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:54.080251 env[1824]: time="2024-02-12T20:22:54.080187394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:54.082057 env[1824]: time="2024-02-12T20:22:54.082010322Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:d88fbf485621d26e515136c1848b666d7dfe0fa84ca7ebd826447b039d306d88\"" Feb 12 20:22:54.100613 env[1824]: time="2024-02-12T20:22:54.100488636Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 12 20:22:56.643917 env[1824]: time="2024-02-12T20:22:56.643846513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:56.651711 env[1824]: time="2024-02-12T20:22:56.651645159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:22:56.657979 env[1824]: time="2024-02-12T20:22:56.657922885Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:56.659761 env[1824]: time="2024-02-12T20:22:56.659664813Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:71d8e883014e0849ca9a3161bd1feac09ad210dea2f4140732e218f04a6826c2\"" Feb 12 20:22:56.666298 env[1824]: time="2024-02-12T20:22:56.666222163Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:56.678424 env[1824]: time="2024-02-12T20:22:56.678371594Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 12 20:22:58.294044 env[1824]: time="2024-02-12T20:22:58.293919855Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:58.298683 env[1824]: time="2024-02-12T20:22:58.297866847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:58.301834 env[1824]: time="2024-02-12T20:22:58.301765451Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:58.306335 env[1824]: time="2024-02-12T20:22:58.306259788Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:22:58.309675 env[1824]: time="2024-02-12T20:22:58.309606752Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:a636f3d6300bad4775ea80ad544e38f486a039732c4871bddc1db3a5336c871a\"" Feb 12 20:22:58.327427 env[1824]: time="2024-02-12T20:22:58.327327227Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 12 20:22:59.724098 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1968468211.mount: Deactivated successfully. Feb 12 20:23:00.493027 env[1824]: time="2024-02-12T20:23:00.492922682Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:00.497751 env[1824]: time="2024-02-12T20:23:00.497660696Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:00.501303 env[1824]: time="2024-02-12T20:23:00.501232818Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:00.504379 env[1824]: time="2024-02-12T20:23:00.504300295Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:00.506311 env[1824]: time="2024-02-12T20:23:00.506218038Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference 
\"sha256:95874282cd4f2ad9bc384735e604f0380cff88d61a2ca9db65890e6d9df46926\"" Feb 12 20:23:00.525250 env[1824]: time="2024-02-12T20:23:00.525173260Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 12 20:23:00.655338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 12 20:23:00.655723 systemd[1]: Stopped kubelet.service. Feb 12 20:23:00.665929 kernel: kauditd_printk_skb: 86 callbacks suppressed Feb 12 20:23:00.666027 kernel: audit: type=1130 audit(1707769380.655:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:00.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:00.658777 systemd[1]: Started kubelet.service. Feb 12 20:23:00.674452 kernel: audit: type=1131 audit(1707769380.655:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:00.655000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:00.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:00.682672 kernel: audit: type=1130 audit(1707769380.658:175): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:00.765950 kubelet[2366]: E0212 20:23:00.764938 2366 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:23:00.774041 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:23:00.774456 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:23:00.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:23:00.784613 kernel: audit: type=1131 audit(1707769380.774:176): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:23:01.190697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount429521571.mount: Deactivated successfully. 
Feb 12 20:23:01.199675 env[1824]: time="2024-02-12T20:23:01.199590230Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.203351 env[1824]: time="2024-02-12T20:23:01.203282044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.206534 env[1824]: time="2024-02-12T20:23:01.206463828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.209444 env[1824]: time="2024-02-12T20:23:01.209390554Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:01.210614 env[1824]: time="2024-02-12T20:23:01.210566166Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Feb 12 20:23:01.228699 env[1824]: time="2024-02-12T20:23:01.228603627Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 12 20:23:02.425673 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2561816149.mount: Deactivated successfully. 
Feb 12 20:23:05.658638 env[1824]: time="2024-02-12T20:23:05.658576851Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:05.699105 env[1824]: time="2024-02-12T20:23:05.699037785Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:05.730008 env[1824]: time="2024-02-12T20:23:05.729940367Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:05.745489 env[1824]: time="2024-02-12T20:23:05.745402798Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:05.747066 env[1824]: time="2024-02-12T20:23:05.747010122Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:ef245802824036d4a23ba6f8b3f04c055416f9dc73a54d546b1f98ad16f6b8cb\"" Feb 12 20:23:05.766399 env[1824]: time="2024-02-12T20:23:05.765498720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 12 20:23:06.498159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2343799071.mount: Deactivated successfully. 
Feb 12 20:23:07.258406 env[1824]: time="2024-02-12T20:23:07.258340831Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:07.261449 env[1824]: time="2024-02-12T20:23:07.261392961Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:07.264673 env[1824]: time="2024-02-12T20:23:07.264603394Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:07.267607 env[1824]: time="2024-02-12T20:23:07.267524707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:07.268809 env[1824]: time="2024-02-12T20:23:07.268758525Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:b19406328e70dd2f6a36d6dbe4e867b0684ced2fdeb2f02ecb54ead39ec0bac0\"" Feb 12 20:23:10.899114 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 12 20:23:10.899537 systemd[1]: Stopped kubelet.service. Feb 12 20:23:10.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.910491 systemd[1]: Started kubelet.service. Feb 12 20:23:10.911640 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Feb 12 20:23:10.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.927198 kernel: audit: type=1130 audit(1707769390.899:177): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.927374 kernel: audit: type=1131 audit(1707769390.899:178): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.940570 kernel: audit: type=1130 audit(1707769390.910:179): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.911000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:10.954711 kernel: audit: type=1131 audit(1707769390.911:180): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:11.039701 kubelet[2439]: E0212 20:23:11.039622 2439 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 12 20:23:11.043793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 12 20:23:11.044204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 12 20:23:11.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:23:11.056603 kernel: audit: type=1131 audit(1707769391.044:181): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Feb 12 20:23:14.307272 systemd[1]: Stopped kubelet.service. Feb 12 20:23:14.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:14.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:14.325014 kernel: audit: type=1130 audit(1707769394.306:182): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:14.325153 kernel: audit: type=1131 audit(1707769394.306:183): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:14.346842 systemd[1]: Reloading. Feb 12 20:23:14.462407 /usr/lib/systemd/system-generators/torcx-generator[2472]: time="2024-02-12T20:23:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:23:14.467726 /usr/lib/systemd/system-generators/torcx-generator[2472]: time="2024-02-12T20:23:14Z" level=info msg="torcx already run" Feb 12 20:23:14.648796 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 12 20:23:14.649143 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:23:14.690957 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:23:14.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:14.922321 systemd[1]: Started kubelet.service. Feb 12 20:23:14.936595 kernel: audit: type=1130 audit(1707769394.921:184): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:15.022164 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Feb 12 20:23:15.022164 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:23:15.022797 kubelet[2534]: I0212 20:23:15.022312 2534 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:23:15.025169 kubelet[2534]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:23:15.025169 kubelet[2534]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:23:16.400610 kubelet[2534]: I0212 20:23:16.400571 2534 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:23:16.401262 kubelet[2534]: I0212 20:23:16.401237 2534 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:23:16.401798 kubelet[2534]: I0212 20:23:16.401771 2534 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:23:16.410135 kubelet[2534]: E0212 20:23:16.410073 2534 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.16.195:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.410283 kubelet[2534]: I0212 20:23:16.410166 2534 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:23:16.411481 kubelet[2534]: W0212 20:23:16.411439 2534 machine.go:65] Cannot 
read vendor id correctly, set empty. Feb 12 20:23:16.412897 kubelet[2534]: I0212 20:23:16.412849 2534 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:23:16.413814 kubelet[2534]: I0212 20:23:16.413773 2534 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:23:16.413929 kubelet[2534]: I0212 20:23:16.413899 2534 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:23:16.414109 kubelet[2534]: I0212 20:23:16.413938 2534 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:23:16.414109 kubelet[2534]: I0212 
20:23:16.413966 2534 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:23:16.414246 kubelet[2534]: I0212 20:23:16.414151 2534 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:23:16.427606 kubelet[2534]: I0212 20:23:16.427558 2534 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:23:16.427606 kubelet[2534]: I0212 20:23:16.427607 2534 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:23:16.427838 kubelet[2534]: I0212 20:23:16.427679 2534 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:23:16.427838 kubelet[2534]: I0212 20:23:16.427704 2534 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:23:16.431368 kubelet[2534]: I0212 20:23:16.431320 2534 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:23:16.432087 kubelet[2534]: W0212 20:23:16.432044 2534 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 12 20:23:16.432911 kubelet[2534]: I0212 20:23:16.432864 2534 server.go:1186] "Started kubelet" Feb 12 20:23:16.433141 kubelet[2534]: W0212 20:23:16.433067 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-195&limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.433228 kubelet[2534]: E0212 20:23:16.433156 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-195&limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.433322 kubelet[2534]: W0212 20:23:16.433266 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.195:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.433393 kubelet[2534]: E0212 20:23:16.433331 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.195:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.434000 audit[2534]: AVC avc: denied { mac_admin } for pid=2534 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:16.437774 kubelet[2534]: I0212 20:23:16.436375 2534 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 20:23:16.437774 kubelet[2534]: I0212 20:23:16.436438 
2534 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 20:23:16.437774 kubelet[2534]: I0212 20:23:16.436570 2534 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:23:16.441809 kubelet[2534]: I0212 20:23:16.441773 2534 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:23:16.442970 kubelet[2534]: I0212 20:23:16.442941 2534 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:23:16.434000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:16.448010 kernel: audit: type=1400 audit(1707769396.434:185): avc: denied { mac_admin } for pid=2534 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:16.448186 kernel: audit: type=1401 audit(1707769396.434:185): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:16.449820 kubelet[2534]: E0212 20:23:16.448999 2534 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-16-195.17b33737c8a98124", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-16-195", UID:"ip-172-31-16-195", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", 
Host:"ip-172-31-16-195"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 23, 16, 432830756, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 23, 16, 432830756, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.16.195:6443/api/v1/namespaces/default/events": dial tcp 172.31.16.195:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:23:16.450520 kubelet[2534]: I0212 20:23:16.450484 2534 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:23:16.451353 kubelet[2534]: E0212 20:23:16.451292 2534 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:23:16.451353 kubelet[2534]: E0212 20:23:16.451351 2534 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:23:16.434000 audit[2534]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f23140 a1=4000b3d368 a2=4000f23110 a3=25 items=0 ppid=1 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.453664 kubelet[2534]: I0212 20:23:16.453628 2534 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:23:16.459832 kubelet[2534]: E0212 20:23:16.459787 2534 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-195?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.462769 kernel: audit: type=1300 audit(1707769396.434:185): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f23140 a1=4000b3d368 a2=4000f23110 a3=25 items=0 ppid=1 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.465262 kubelet[2534]: W0212 20:23:16.463860 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.465262 kubelet[2534]: E0212 20:23:16.463953 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.434000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:16.476095 kernel: audit: type=1327 audit(1707769396.434:185): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:16.476213 kernel: audit: type=1400 audit(1707769396.434:186): avc: denied { mac_admin } for pid=2534 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:16.434000 audit[2534]: AVC avc: denied { mac_admin } for pid=2534 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:16.434000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:16.493277 kernel: audit: type=1401 audit(1707769396.434:186): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:16.434000 audit[2534]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b3b3c0 a1=4000b3d380 a2=4000f231d0 a3=25 items=0 ppid=1 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.507783 kernel: audit: type=1300 audit(1707769396.434:186): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b3b3c0 a1=4000b3d380 a2=4000f231d0 a3=25 items=0 ppid=1 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" 
exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.434000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:16.520160 kernel: audit: type=1327 audit(1707769396.434:186): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:16.439000 audit[2544]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.527186 kernel: audit: type=1325 audit(1707769396.439:187): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2544 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.439000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd0d7d740 a2=0 a3=1 items=0 ppid=2534 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.538656 kernel: audit: type=1300 audit(1707769396.439:187): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd0d7d740 a2=0 a3=1 items=0 ppid=2534 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.439000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:23:16.451000 audit[2545]: 
NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.451000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7fe93e0 a2=0 a3=1 items=0 ppid=2534 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.451000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:23:16.456000 audit[2547]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.456000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd0aa7580 a2=0 a3=1 items=0 ppid=2534 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.456000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 20:23:16.459000 audit[2549]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.459000 audit[2549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffda9f5d40 a2=0 a3=1 items=0 ppid=2534 pid=2549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.459000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Feb 12 
20:23:16.516000 audit[2557]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2557 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.516000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffffa9bf5f0 a2=0 a3=1 items=0 ppid=2534 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Feb 12 20:23:16.521000 audit[2558]: NETFILTER_CFG table=nat:31 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.521000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffea8719e0 a2=0 a3=1 items=0 ppid=2534 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.521000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:23:16.546000 audit[2561]: NETFILTER_CFG table=nat:32 family=2 entries=1 op=nft_register_rule pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.546000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc1c64c10 a2=0 a3=1 items=0 ppid=2534 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.546000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:23:16.554897 kubelet[2534]: I0212 20:23:16.554863 2534 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:16.555827 kubelet[2534]: E0212 20:23:16.555799 2534 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.195:6443/api/v1/nodes\": dial tcp 172.31.16.195:6443: connect: connection refused" node="ip-172-31-16-195" Feb 12 20:23:16.560000 audit[2564]: NETFILTER_CFG table=filter:33 family=2 entries=1 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.560000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffc9d5a70 a2=0 a3=1 items=0 ppid=2534 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:23:16.564000 audit[2565]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=2565 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.564000 audit[2565]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc2acdc80 a2=0 a3=1 items=0 ppid=2534 pid=2565 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.564000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:23:16.567000 audit[2566]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.567000 audit[2566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6fe2020 a2=0 a3=1 items=0 ppid=2534 pid=2566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:23:16.571000 audit[2568]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_rule pid=2568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.571000 audit[2568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcc77a290 a2=0 a3=1 items=0 ppid=2534 pid=2568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:23:16.576000 audit[2570]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_rule pid=2570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.576000 audit[2570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffffd57ef20 a2=0 a3=1 items=0 ppid=2534 pid=2570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 
20:23:16.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:23:16.581000 audit[2572]: NETFILTER_CFG table=nat:38 family=2 entries=1 op=nft_register_rule pid=2572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.581000 audit[2572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffc741be90 a2=0 a3=1 items=0 ppid=2534 pid=2572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.581000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:23:16.593000 audit[2574]: NETFILTER_CFG table=nat:39 family=2 entries=1 op=nft_register_rule pid=2574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.593000 audit[2574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc47efe70 a2=0 a3=1 items=0 ppid=2534 pid=2574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.593000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:23:16.602669 kubelet[2534]: I0212 20:23:16.602617 2534 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:23:16.602669 kubelet[2534]: I0212 20:23:16.602655 2534 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 
20:23:16.602897 kubelet[2534]: I0212 20:23:16.602685 2534 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:23:16.605452 kubelet[2534]: I0212 20:23:16.605403 2534 policy_none.go:49] "None policy: Start" Feb 12 20:23:16.606762 kubelet[2534]: I0212 20:23:16.606715 2534 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:23:16.606762 kubelet[2534]: I0212 20:23:16.606763 2534 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:23:16.605000 audit[2576]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_rule pid=2576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.605000 audit[2576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffc9fd7dc0 a2=0 a3=1 items=0 ppid=2534 pid=2576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.605000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:23:16.608288 kubelet[2534]: I0212 20:23:16.608261 2534 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Feb 12 20:23:16.613000 audit[2577]: NETFILTER_CFG table=mangle:41 family=10 entries=2 op=nft_register_chain pid=2577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.613000 audit[2577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd74b76a0 a2=0 a3=1 items=0 ppid=2534 pid=2577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.613000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Feb 12 20:23:16.619000 audit[2578]: NETFILTER_CFG table=mangle:42 family=2 entries=1 op=nft_register_chain pid=2578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.619000 audit[2578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc9e1f490 a2=0 a3=1 items=0 ppid=2534 pid=2578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.619000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:23:16.621623 kubelet[2534]: I0212 20:23:16.621562 2534 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:23:16.620000 audit[2534]: AVC avc: denied { mac_admin } for pid=2534 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:16.620000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:16.620000 audit[2534]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000de9170 a1=40010598f0 
a2=4000de9140 a3=25 items=0 ppid=1 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.620000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:16.622031 kubelet[2534]: I0212 20:23:16.621727 2534 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 20:23:16.622031 kubelet[2534]: I0212 20:23:16.622026 2534 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:23:16.623000 audit[2579]: NETFILTER_CFG table=nat:43 family=10 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.623000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc236afe0 a2=0 a3=1 items=0 ppid=2534 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.623000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Feb 12 20:23:16.627000 audit[2580]: NETFILTER_CFG table=nat:44 family=2 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.627000 audit[2580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd269dcb0 a2=0 a3=1 items=0 ppid=2534 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.627000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:23:16.632987 kubelet[2534]: E0212 20:23:16.632941 2534 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-16-195\" not found" Feb 12 20:23:16.632000 audit[2582]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:16.632000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe6fb88b0 a2=0 a3=1 items=0 ppid=2534 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.632000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 20:23:16.634000 audit[2583]: NETFILTER_CFG table=nat:46 family=10 entries=1 op=nft_register_rule pid=2583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.634000 audit[2583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffffc8f2c30 a2=0 a3=1 items=0 ppid=2534 pid=2583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.634000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Feb 12 20:23:16.636000 audit[2584]: NETFILTER_CFG table=filter:47 family=10 entries=2 op=nft_register_chain pid=2584 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.636000 audit[2584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffe5438b60 a2=0 a3=1 items=0 ppid=2534 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.636000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Feb 12 20:23:16.641000 audit[2586]: NETFILTER_CFG table=filter:48 family=10 entries=1 op=nft_register_rule pid=2586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.641000 audit[2586]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffe34b3360 a2=0 a3=1 items=0 ppid=2534 pid=2586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.641000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Feb 12 20:23:16.643000 audit[2587]: NETFILTER_CFG table=nat:49 family=10 entries=1 op=nft_register_chain pid=2587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.643000 audit[2587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcf369260 a2=0 a3=1 items=0 ppid=2534 pid=2587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.643000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Feb 12 20:23:16.645000 audit[2588]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.645000 audit[2588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd90cd720 a2=0 a3=1 items=0 ppid=2534 pid=2588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.645000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Feb 12 20:23:16.649000 audit[2590]: NETFILTER_CFG table=nat:51 family=10 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.649000 audit[2590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffcb03b010 a2=0 a3=1 items=0 ppid=2534 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.649000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Feb 12 20:23:16.658000 audit[2592]: NETFILTER_CFG table=nat:52 family=10 entries=2 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.658000 audit[2592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc06a4fe0 a2=0 a3=1 items=0 ppid=2534 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
Feb 12 20:23:16.658000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Feb 12 20:23:16.661180 kubelet[2534]: E0212 20:23:16.661134 2534 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-195?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.663000 audit[2594]: NETFILTER_CFG table=nat:53 family=10 entries=1 op=nft_register_rule pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.663000 audit[2594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffff606fb0 a2=0 a3=1 items=0 ppid=2534 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.663000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Feb 12 20:23:16.668000 audit[2596]: NETFILTER_CFG table=nat:54 family=10 entries=1 op=nft_register_rule pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.668000 audit[2596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe7f986c0 a2=0 a3=1 items=0 ppid=2534 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.668000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Feb 12 20:23:16.675000 audit[2598]: NETFILTER_CFG table=nat:55 family=10 entries=1 op=nft_register_rule pid=2598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.675000 audit[2598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffe7dd8010 a2=0 a3=1 items=0 ppid=2534 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.675000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Feb 12 20:23:16.677535 kubelet[2534]: I0212 20:23:16.677473 2534 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 12 20:23:16.677745 kubelet[2534]: I0212 20:23:16.677723 2534 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:23:16.678137 kubelet[2534]: I0212 20:23:16.678093 2534 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:23:16.679116 kubelet[2534]: W0212 20:23:16.679056 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.16.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.679348 kubelet[2534]: E0212 20:23:16.679322 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:16.677000 audit[2599]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.677000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd5c7a9a0 a2=0 a3=1 items=0 ppid=2534 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.677000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Feb 12 20:23:16.680426 kubelet[2534]: E0212 20:23:16.680394 2534 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 12 20:23:16.681000 audit[2600]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=2600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.681000 audit[2600]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0fce440 a2=0 a3=1 items=0 ppid=2534 pid=2600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Feb 12 20:23:16.683000 audit[2601]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=2601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:16.683000 audit[2601]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffecc22f90 a2=0 a3=1 items=0 ppid=2534 pid=2601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:16.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Feb 12 20:23:16.758578 kubelet[2534]: I0212 20:23:16.758528 2534 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:16.759184 kubelet[2534]: E0212 20:23:16.759156 2534 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.195:6443/api/v1/nodes\": dial tcp 172.31.16.195:6443: connect: connection refused" node="ip-172-31-16-195" Feb 12 20:23:16.781523 kubelet[2534]: I0212 20:23:16.781490 2534 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:16.783776 kubelet[2534]: I0212 20:23:16.783737 2534 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:16.786089 kubelet[2534]: I0212 20:23:16.786055 2534 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:16.787190 kubelet[2534]: I0212 20:23:16.787159 2534 status_manager.go:698] "Failed to 
get status for pod" podUID=6f047c74c7f6e7ee925d0d6973c5ba34 pod="kube-system/kube-scheduler-ip-172-31-16-195" err="Get \"https://172.31.16.195:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ip-172-31-16-195\": dial tcp 172.31.16.195:6443: connect: connection refused" Feb 12 20:23:16.793853 kubelet[2534]: I0212 20:23:16.793804 2534 status_manager.go:698] "Failed to get status for pod" podUID=06b55d6fb5fbb3468c0de67004585f12 pod="kube-system/kube-apiserver-ip-172-31-16-195" err="Get \"https://172.31.16.195:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ip-172-31-16-195\": dial tcp 172.31.16.195:6443: connect: connection refused" Feb 12 20:23:16.807375 kubelet[2534]: I0212 20:23:16.807340 2534 status_manager.go:698] "Failed to get status for pod" podUID=429b7bb059f7f8fd07e975764c7e55ba pod="kube-system/kube-controller-manager-ip-172-31-16-195" err="Get \"https://172.31.16.195:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ip-172-31-16-195\": dial tcp 172.31.16.195:6443: connect: connection refused" Feb 12 20:23:16.854869 kubelet[2534]: I0212 20:23:16.854784 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-ca-certs\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:16.855092 kubelet[2534]: I0212 20:23:16.855071 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:16.855295 kubelet[2534]: I0212 20:23:16.855274 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:16.855489 kubelet[2534]: I0212 20:23:16.855469 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:16.855713 kubelet[2534]: I0212 20:23:16.855692 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:16.855898 kubelet[2534]: I0212 20:23:16.855878 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f047c74c7f6e7ee925d0d6973c5ba34-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-195\" (UID: \"6f047c74c7f6e7ee925d0d6973c5ba34\") " pod="kube-system/kube-scheduler-ip-172-31-16-195" Feb 12 20:23:16.856101 kubelet[2534]: I0212 20:23:16.856081 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:16.856286 kubelet[2534]: I0212 
20:23:16.856265 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:16.856445 kubelet[2534]: I0212 20:23:16.856424 2534 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:17.062536 kubelet[2534]: E0212 20:23:17.062489 2534 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-195?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.101931 env[1824]: time="2024-02-12T20:23:17.101859890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-195,Uid:06b55d6fb5fbb3468c0de67004585f12,Namespace:kube-system,Attempt:0,}" Feb 12 20:23:17.107527 env[1824]: time="2024-02-12T20:23:17.107435109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-195,Uid:6f047c74c7f6e7ee925d0d6973c5ba34,Namespace:kube-system,Attempt:0,}" Feb 12 20:23:17.109121 env[1824]: time="2024-02-12T20:23:17.109062664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-195,Uid:429b7bb059f7f8fd07e975764c7e55ba,Namespace:kube-system,Attempt:0,}" Feb 12 20:23:17.160943 kubelet[2534]: I0212 20:23:17.160888 2534 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:17.161365 
kubelet[2534]: E0212 20:23:17.161329 2534 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.195:6443/api/v1/nodes\": dial tcp 172.31.16.195:6443: connect: connection refused" node="ip-172-31-16-195" Feb 12 20:23:17.388569 kubelet[2534]: W0212 20:23:17.388313 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.16.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-195&limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.388569 kubelet[2534]: E0212 20:23:17.388412 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.16.195:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-16-195&limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.626415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1621223264.mount: Deactivated successfully. 
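The audit `PROCTITLE` records scattered through this log (e.g. `proctitle=6970367461626C6573...`) encode the audited process's command line as hex bytes, with NUL bytes separating the argv elements. A minimal Python sketch for decoding them; the sample string is copied from the first ip6tables record earlier in this log, and `decode_proctitle` is just an illustrative helper name, not a real audit-tooling API:

```python
def decode_proctitle(hexstr: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its argv list.

    The kernel emits the command line as hex-encoded bytes where each
    argument is terminated by a NUL (0x00), so splitting on NUL
    recovers the original argv.
    """
    return bytes.fromhex(hexstr).decode("utf-8", errors="replace").split("\x00")

# Sample taken verbatim from the NETFILTER_CFG/PROCTITLE pair above.
sample = (
    "6970367461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D4D41524B2D4D415351002D74006E6174"
)
print(" ".join(decode_proctitle(sample)))
# ip6tables -w 5 -W 100000 -N KUBE-MARK-MASQ -t nat
```

Decoding these records is often the quickest way to see exactly which `KUBE-*` chains and rules the kubelet's `xtables-nft-multi` invocations were installing at each timestamp.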
Feb 12 20:23:17.635716 env[1824]: time="2024-02-12T20:23:17.635647155Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.639226 env[1824]: time="2024-02-12T20:23:17.639079434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.642902 env[1824]: time="2024-02-12T20:23:17.642827504Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.645204 env[1824]: time="2024-02-12T20:23:17.645149318Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.648158 env[1824]: time="2024-02-12T20:23:17.648095573Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.652163 env[1824]: time="2024-02-12T20:23:17.652065567Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.657989 env[1824]: time="2024-02-12T20:23:17.657940426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.661220 env[1824]: time="2024-02-12T20:23:17.661159144Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.663455 env[1824]: time="2024-02-12T20:23:17.663409363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.666631 env[1824]: time="2024-02-12T20:23:17.666568124Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.671191 env[1824]: time="2024-02-12T20:23:17.671117600Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.679341 kubelet[2534]: W0212 20:23:17.679186 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.16.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.679341 kubelet[2534]: E0212 20:23:17.679285 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.16.195:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.698046 env[1824]: time="2024-02-12T20:23:17.697967115Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:17.718051 kubelet[2534]: W0212 20:23:17.717338 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: 
Get "https://172.31.16.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.718051 kubelet[2534]: E0212 20:23:17.717419 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.16.195:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.723077 env[1824]: time="2024-02-12T20:23:17.722842893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:17.723475 env[1824]: time="2024-02-12T20:23:17.723353983Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:17.725018 env[1824]: time="2024-02-12T20:23:17.723756547Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:17.725381 env[1824]: time="2024-02-12T20:23:17.724384370Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ab66f2d4efdff44373c33f4cba9a17add899f802bf53023ffbfb8c7e52c377a5 pid=2609 runtime=io.containerd.runc.v2 Feb 12 20:23:17.786596 env[1824]: time="2024-02-12T20:23:17.783426820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:17.786596 env[1824]: time="2024-02-12T20:23:17.783512958Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:17.786928 env[1824]: time="2024-02-12T20:23:17.783565919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:17.787392 env[1824]: time="2024-02-12T20:23:17.787276973Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b pid=2639 runtime=io.containerd.runc.v2 Feb 12 20:23:17.795594 env[1824]: time="2024-02-12T20:23:17.790253272Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:17.795594 env[1824]: time="2024-02-12T20:23:17.790416728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:17.795594 env[1824]: time="2024-02-12T20:23:17.790502926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:17.795594 env[1824]: time="2024-02-12T20:23:17.791175432Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08 pid=2661 runtime=io.containerd.runc.v2 Feb 12 20:23:17.866518 kubelet[2534]: E0212 20:23:17.866445 2534 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-195?timeout=10s": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.927698 env[1824]: time="2024-02-12T20:23:17.927641502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-16-195,Uid:06b55d6fb5fbb3468c0de67004585f12,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab66f2d4efdff44373c33f4cba9a17add899f802bf53023ffbfb8c7e52c377a5\"" Feb 12 20:23:17.934360 env[1824]: time="2024-02-12T20:23:17.934292800Z" level=info msg="CreateContainer within 
sandbox \"ab66f2d4efdff44373c33f4cba9a17add899f802bf53023ffbfb8c7e52c377a5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 12 20:23:17.966158 kubelet[2534]: I0212 20:23:17.965623 2534 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:17.966393 kubelet[2534]: E0212 20:23:17.966354 2534 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.16.195:6443/api/v1/nodes\": dial tcp 172.31.16.195:6443: connect: connection refused" node="ip-172-31-16-195" Feb 12 20:23:17.968102 env[1824]: time="2024-02-12T20:23:17.968017997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-16-195,Uid:6f047c74c7f6e7ee925d0d6973c5ba34,Namespace:kube-system,Attempt:0,} returns sandbox id \"f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b\"" Feb 12 20:23:17.972409 kubelet[2534]: E0212 20:23:17.972207 2534 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-16-195.17b33737c8a98124", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-16-195", UID:"ip-172-31-16-195", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-16-195"}, FirstTimestamp:time.Date(2024, time.February, 12, 20, 23, 16, 432830756, time.Local), LastTimestamp:time.Date(2024, time.February, 12, 20, 23, 16, 432830756, time.Local), Count:1, Type:"Normal", 
EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://172.31.16.195:6443/api/v1/namespaces/default/events": dial tcp 172.31.16.195:6443: connect: connection refused'(may retry after sleeping) Feb 12 20:23:17.973216 env[1824]: time="2024-02-12T20:23:17.973156745Z" level=info msg="CreateContainer within sandbox \"f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 12 20:23:17.977611 env[1824]: time="2024-02-12T20:23:17.977499210Z" level=info msg="CreateContainer within sandbox \"ab66f2d4efdff44373c33f4cba9a17add899f802bf53023ffbfb8c7e52c377a5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6a4a14686d01b0f679d24142a40a1f00cb90a3888149029c143d67bf3e0ef866\"" Feb 12 20:23:17.981288 env[1824]: time="2024-02-12T20:23:17.981230014Z" level=info msg="StartContainer for \"6a4a14686d01b0f679d24142a40a1f00cb90a3888149029c143d67bf3e0ef866\"" Feb 12 20:23:17.992930 kubelet[2534]: W0212 20:23:17.992775 2534 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.16.195:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.992930 kubelet[2534]: E0212 20:23:17.992884 2534 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.16.195:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.16.195:6443: connect: connection refused Feb 12 20:23:17.994380 env[1824]: time="2024-02-12T20:23:17.994294365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-16-195,Uid:429b7bb059f7f8fd07e975764c7e55ba,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08\"" Feb 12 20:23:18.001623 env[1824]: time="2024-02-12T20:23:18.001527995Z" level=info msg="CreateContainer within sandbox \"c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 12 20:23:18.007903 env[1824]: time="2024-02-12T20:23:18.007823091Z" level=info msg="CreateContainer within sandbox \"f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b\"" Feb 12 20:23:18.008881 env[1824]: time="2024-02-12T20:23:18.008821450Z" level=info msg="StartContainer for \"b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b\"" Feb 12 20:23:18.025502 env[1824]: time="2024-02-12T20:23:18.025416798Z" level=info msg="CreateContainer within sandbox \"c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c\"" Feb 12 20:23:18.026374 env[1824]: time="2024-02-12T20:23:18.026317848Z" level=info msg="StartContainer for \"22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c\"" Feb 12 20:23:18.215300 env[1824]: time="2024-02-12T20:23:18.212894362Z" level=info msg="StartContainer for \"6a4a14686d01b0f679d24142a40a1f00cb90a3888149029c143d67bf3e0ef866\" returns successfully" Feb 12 20:23:18.223400 env[1824]: time="2024-02-12T20:23:18.223324572Z" level=info msg="StartContainer for \"b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b\" returns successfully" Feb 12 20:23:18.308558 env[1824]: time="2024-02-12T20:23:18.308471281Z" level=info msg="StartContainer for \"22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c\" returns successfully" Feb 12 20:23:19.568121 kubelet[2534]: 
I0212 20:23:19.568061 2534 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:19.689643 amazon-ssm-agent[1782]: 2024-02-12 20:23:19 INFO [MessagingDeliveryService] [Association] Schedule manager refreshed with 0 associations, 0 new associations associated Feb 12 20:23:23.559323 kubelet[2534]: E0212 20:23:23.559242 2534 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-16-195\" not found" node="ip-172-31-16-195" Feb 12 20:23:23.673154 kubelet[2534]: I0212 20:23:23.673075 2534 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-16-195" Feb 12 20:23:24.433636 kubelet[2534]: I0212 20:23:24.433593 2534 apiserver.go:52] "Watching apiserver" Feb 12 20:23:24.454297 kubelet[2534]: I0212 20:23:24.454230 2534 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:23:24.527525 kubelet[2534]: I0212 20:23:24.527476 2534 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:23:25.461910 update_engine[1802]: I0212 20:23:25.461855 1802 update_attempter.cc:509] Updating boot flags... Feb 12 20:23:26.593853 systemd[1]: Reloading. Feb 12 20:23:26.748262 /usr/lib/systemd/system-generators/torcx-generator[3045]: time="2024-02-12T20:23:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 12 20:23:26.752721 /usr/lib/systemd/system-generators/torcx-generator[3045]: time="2024-02-12T20:23:26Z" level=info msg="torcx already run" Feb 12 20:23:26.950286 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Feb 12 20:23:26.950337 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 12 20:23:26.993203 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 12 20:23:27.243805 kubelet[2534]: I0212 20:23:27.243648 2534 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:23:27.250413 systemd[1]: Stopping kubelet.service... Feb 12 20:23:27.277785 kernel: kauditd_printk_skb: 101 callbacks suppressed Feb 12 20:23:27.277913 kernel: audit: type=1131 audit(1707769407.265:221): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:27.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:27.266482 systemd[1]: kubelet.service: Deactivated successfully. Feb 12 20:23:27.267329 systemd[1]: Stopped kubelet.service. Feb 12 20:23:27.280087 systemd[1]: Started kubelet.service. Feb 12 20:23:27.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:27.299180 kernel: audit: type=1130 audit(1707769407.280:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:27.462849 kubelet[3105]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:23:27.462849 kubelet[3105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:23:27.463482 kubelet[3105]: I0212 20:23:27.462979 3105 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 12 20:23:27.465537 kubelet[3105]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 12 20:23:27.465537 kubelet[3105]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 12 20:23:27.472009 kubelet[3105]: I0212 20:23:27.471942 3105 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 12 20:23:27.472009 kubelet[3105]: I0212 20:23:27.471997 3105 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 12 20:23:27.472477 kubelet[3105]: I0212 20:23:27.472432 3105 server.go:836] "Client rotation is on, will bootstrap in background" Feb 12 20:23:27.475234 kubelet[3105]: I0212 20:23:27.475175 3105 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Feb 12 20:23:27.477264 kubelet[3105]: I0212 20:23:27.477220 3105 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 12 20:23:27.480652 kubelet[3105]: W0212 20:23:27.480595 3105 machine.go:65] Cannot read vendor id correctly, set empty. Feb 12 20:23:27.482214 kubelet[3105]: I0212 20:23:27.482147 3105 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 12 20:23:27.486902 kubelet[3105]: I0212 20:23:27.486832 3105 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 12 20:23:27.487096 kubelet[3105]: I0212 20:23:27.487010 3105 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none 
ExperimentalTopologyManagerPolicyOptions:map[]} Feb 12 20:23:27.487096 kubelet[3105]: I0212 20:23:27.487072 3105 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 12 20:23:27.487308 kubelet[3105]: I0212 20:23:27.487098 3105 container_manager_linux.go:308] "Creating device plugin manager" Feb 12 20:23:27.487308 kubelet[3105]: I0212 20:23:27.487161 3105 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:23:27.495213 kubelet[3105]: I0212 20:23:27.495063 3105 kubelet.go:398] "Attempting to sync node with API server" Feb 12 20:23:27.495213 kubelet[3105]: I0212 20:23:27.495116 3105 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 12 20:23:27.495213 kubelet[3105]: I0212 20:23:27.495166 3105 kubelet.go:297] "Adding apiserver pod source" Feb 12 20:23:27.495213 kubelet[3105]: I0212 20:23:27.495195 3105 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 12 20:23:27.515213 kubelet[3105]: I0212 20:23:27.506345 3105 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 12 20:23:27.515213 kubelet[3105]: I0212 20:23:27.509738 3105 server.go:1186] "Started kubelet" Feb 12 20:23:27.533000 audit[3105]: AVC avc: denied { mac_admin } for pid=3105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:27.543576 kernel: audit: type=1400 audit(1707769407.533:223): avc: denied { mac_admin } for pid=3105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:27.543703 kernel: audit: type=1401 audit(1707769407.533:223): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:27.533000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:27.543960 kubelet[3105]: I0212 20:23:27.543920 3105 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 12 20:23:27.545515 kubelet[3105]: I0212 20:23:27.545458 3105 server.go:451] "Adding debug handlers to kubelet server" Feb 12 20:23:27.533000 audit[3105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001024690 a1=4000927680 a2=4001024660 a3=25 items=0 ppid=1 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:27.559328 kernel: audit: type=1300 audit(1707769407.533:223): arch=c00000b7 syscall=5 success=no exit=-22 a0=4001024690 a1=4000927680 a2=4001024660 a3=25 items=0 ppid=1 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:27.573854 kernel: audit: type=1327 audit(1707769407.533:223): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:27.533000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:27.574335 kubelet[3105]: I0212 20:23:27.574287 3105 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Feb 12 20:23:27.574453 
kubelet[3105]: I0212 20:23:27.574401 3105 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Feb 12 20:23:27.574453 kubelet[3105]: I0212 20:23:27.574450 3105 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 12 20:23:27.572000 audit[3105]: AVC avc: denied { mac_admin } for pid=3105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:27.572000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:27.593614 kernel: audit: type=1400 audit(1707769407.572:224): avc: denied { mac_admin } for pid=3105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:27.594802 kernel: audit: type=1401 audit(1707769407.572:224): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:27.604835 kubelet[3105]: I0212 20:23:27.604786 3105 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 12 20:23:27.607230 kubelet[3105]: E0212 20:23:27.607144 3105 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 12 20:23:27.607230 kubelet[3105]: E0212 20:23:27.607204 3105 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 12 20:23:27.572000 audit[3105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40011003a0 a1=4000926480 a2=4000fe2270 a3=25 items=0 ppid=1 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:27.616882 kubelet[3105]: I0212 20:23:27.616842 3105 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 12 20:23:27.625933 kernel: audit: type=1300 audit(1707769407.572:224): arch=c00000b7 syscall=5 success=no exit=-22 a0=40011003a0 a1=4000926480 a2=4000fe2270 a3=25 items=0 ppid=1 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:27.572000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:27.654938 kernel: audit: type=1327 audit(1707769407.572:224): proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:27.714153 kubelet[3105]: I0212 20:23:27.714104 3105 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-16-195" Feb 12 20:23:27.740099 kubelet[3105]: I0212 20:23:27.740048 3105 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-16-195" Feb 12 20:23:27.740458 kubelet[3105]: I0212 20:23:27.740429 3105 kubelet_node_status.go:73] 
"Successfully registered node" node="ip-172-31-16-195" Feb 12 20:23:27.795669 kubelet[3105]: I0212 20:23:27.795399 3105 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 12 20:23:28.001915 kubelet[3105]: I0212 20:23:28.001873 3105 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 12 20:23:28.002155 kubelet[3105]: I0212 20:23:28.002119 3105 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 12 20:23:28.002373 kubelet[3105]: I0212 20:23:28.002340 3105 kubelet.go:2113] "Starting kubelet main sync loop" Feb 12 20:23:28.002703 kubelet[3105]: E0212 20:23:28.002652 3105 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 12 20:23:28.065066 kubelet[3105]: I0212 20:23:28.064854 3105 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 12 20:23:28.065066 kubelet[3105]: I0212 20:23:28.064892 3105 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 12 20:23:28.065066 kubelet[3105]: I0212 20:23:28.064926 3105 state_mem.go:36] "Initialized new in-memory state store" Feb 12 20:23:28.065768 kubelet[3105]: I0212 20:23:28.065722 3105 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 12 20:23:28.065768 kubelet[3105]: I0212 20:23:28.065768 3105 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 12 20:23:28.065989 kubelet[3105]: I0212 20:23:28.065786 3105 policy_none.go:49] "None policy: Start" Feb 12 20:23:28.067902 kubelet[3105]: I0212 20:23:28.067864 3105 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 12 20:23:28.068189 kubelet[3105]: I0212 20:23:28.068159 3105 state_mem.go:35] "Initializing new in-memory state store" Feb 12 20:23:28.068672 kubelet[3105]: I0212 20:23:28.068641 3105 state_mem.go:75] "Updated machine memory state" Feb 12 20:23:28.080941 kubelet[3105]: I0212 20:23:28.080904 3105 manager.go:455] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 12 20:23:28.080000 audit[3105]: AVC avc: denied { mac_admin } for pid=3105 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:23:28.080000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Feb 12 20:23:28.080000 audit[3105]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4001339d40 a1=4001195d58 a2=4001339d10 a3=25 items=0 ppid=1 pid=3105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:28.080000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Feb 12 20:23:28.081859 kubelet[3105]: I0212 20:23:28.081813 3105 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Feb 12 20:23:28.092892 kubelet[3105]: I0212 20:23:28.092852 3105 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 12 20:23:28.103383 kubelet[3105]: I0212 20:23:28.103219 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:28.103383 kubelet[3105]: I0212 20:23:28.103348 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:28.103675 kubelet[3105]: I0212 20:23:28.103412 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:28.157589 kubelet[3105]: I0212 20:23:28.154997 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-k8s-certs\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:28.157589 kubelet[3105]: I0212 20:23:28.155075 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:28.157589 kubelet[3105]: I0212 20:23:28.155128 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-k8s-certs\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:28.157589 kubelet[3105]: I0212 20:23:28.155173 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-kubeconfig\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:28.157589 kubelet[3105]: I0212 20:23:28.155221 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-ca-certs\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:28.158037 kubelet[3105]: I0212 20:23:28.155269 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/06b55d6fb5fbb3468c0de67004585f12-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-16-195\" (UID: \"06b55d6fb5fbb3468c0de67004585f12\") " pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:28.158037 kubelet[3105]: I0212 20:23:28.155311 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-ca-certs\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:28.158037 kubelet[3105]: I0212 20:23:28.155358 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/429b7bb059f7f8fd07e975764c7e55ba-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-16-195\" (UID: \"429b7bb059f7f8fd07e975764c7e55ba\") " pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:28.158037 kubelet[3105]: I0212 
20:23:28.155435 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6f047c74c7f6e7ee925d0d6973c5ba34-kubeconfig\") pod \"kube-scheduler-ip-172-31-16-195\" (UID: \"6f047c74c7f6e7ee925d0d6973c5ba34\") " pod="kube-system/kube-scheduler-ip-172-31-16-195" Feb 12 20:23:28.497906 kubelet[3105]: I0212 20:23:28.497832 3105 apiserver.go:52] "Watching apiserver" Feb 12 20:23:28.518403 kubelet[3105]: I0212 20:23:28.518302 3105 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 12 20:23:28.559223 kubelet[3105]: I0212 20:23:28.559158 3105 reconciler.go:41] "Reconciler: start to sync state" Feb 12 20:23:29.052324 kubelet[3105]: E0212 20:23:29.052248 3105 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ip-172-31-16-195\" already exists" pod="kube-system/kube-scheduler-ip-172-31-16-195" Feb 12 20:23:29.053804 kubelet[3105]: E0212 20:23:29.053744 3105 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-16-195\" already exists" pod="kube-system/kube-apiserver-ip-172-31-16-195" Feb 12 20:23:29.314112 kubelet[3105]: E0212 20:23:29.313951 3105 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ip-172-31-16-195\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-16-195" Feb 12 20:23:29.934883 kubelet[3105]: I0212 20:23:29.934800 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-16-195" podStartSLOduration=1.934677356 pod.CreationTimestamp="2024-02-12 20:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:29.5376823 +0000 UTC m=+2.246362885" watchObservedRunningTime="2024-02-12 20:23:29.934677356 +0000 UTC m=+2.643357929" Feb 12 20:23:30.348429 
kubelet[3105]: I0212 20:23:30.348283 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-16-195" podStartSLOduration=2.348225154 pod.CreationTimestamp="2024-02-12 20:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:29.948898277 +0000 UTC m=+2.657578958" watchObservedRunningTime="2024-02-12 20:23:30.348225154 +0000 UTC m=+3.056905727" Feb 12 20:23:32.656825 kubelet[3105]: I0212 20:23:32.656778 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-16-195" podStartSLOduration=4.656720113 pod.CreationTimestamp="2024-02-12 20:23:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:30.350821638 +0000 UTC m=+3.059502223" watchObservedRunningTime="2024-02-12 20:23:32.656720113 +0000 UTC m=+5.365400686" Feb 12 20:23:34.955589 sudo[2111]: pam_unix(sudo:session): session closed for user root Feb 12 20:23:34.955000 audit[2111]: USER_END pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:23:34.958417 kernel: kauditd_printk_skb: 4 callbacks suppressed Feb 12 20:23:34.958610 kernel: audit: type=1106 audit(1707769414.955:226): pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Feb 12 20:23:34.955000 audit[2111]: CRED_DISP pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:23:34.975751 kernel: audit: type=1104 audit(1707769414.955:227): pid=2111 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Feb 12 20:23:34.980593 sshd[2107]: pam_unix(sshd:session): session closed for user core Feb 12 20:23:34.982000 audit[2107]: USER_END pid=2107 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:23:34.986619 systemd[1]: sshd@6-172.31.16.195:22-147.75.109.163:41332.service: Deactivated successfully. Feb 12 20:23:34.988396 systemd[1]: session-7.scope: Deactivated successfully. Feb 12 20:23:34.998379 systemd-logind[1800]: Session 7 logged out. Waiting for processes to exit. Feb 12 20:23:35.000621 systemd-logind[1800]: Removed session 7. 
Feb 12 20:23:34.982000 audit[2107]: CRED_DISP pid=2107 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:23:35.010836 kernel: audit: type=1106 audit(1707769414.982:228): pid=2107 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:23:35.010966 kernel: audit: type=1104 audit(1707769414.982:229): pid=2107 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:23:34.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.195:22-147.75.109.163:41332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:35.020451 kernel: audit: type=1131 audit(1707769414.986:230): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.16.195:22-147.75.109.163:41332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:23:41.924088 kubelet[3105]: I0212 20:23:41.924056 3105 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 12 20:23:41.925444 env[1824]: time="2024-02-12T20:23:41.925385004Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 12 20:23:41.926430 kubelet[3105]: I0212 20:23:41.926400 3105 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 12 20:23:42.077882 kubelet[3105]: I0212 20:23:42.077835 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:42.153584 kubelet[3105]: I0212 20:23:42.153520 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f1de7577-ae2a-4526-aa64-b1b28e47d63d-kube-proxy\") pod \"kube-proxy-n55l2\" (UID: \"f1de7577-ae2a-4526-aa64-b1b28e47d63d\") " pod="kube-system/kube-proxy-n55l2" Feb 12 20:23:42.153890 kubelet[3105]: I0212 20:23:42.153862 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f1de7577-ae2a-4526-aa64-b1b28e47d63d-lib-modules\") pod \"kube-proxy-n55l2\" (UID: \"f1de7577-ae2a-4526-aa64-b1b28e47d63d\") " pod="kube-system/kube-proxy-n55l2" Feb 12 20:23:42.154077 kubelet[3105]: I0212 20:23:42.154054 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nzf\" (UniqueName: \"kubernetes.io/projected/f1de7577-ae2a-4526-aa64-b1b28e47d63d-kube-api-access-v6nzf\") pod \"kube-proxy-n55l2\" (UID: \"f1de7577-ae2a-4526-aa64-b1b28e47d63d\") " pod="kube-system/kube-proxy-n55l2" Feb 12 20:23:42.154266 kubelet[3105]: I0212 20:23:42.154243 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f1de7577-ae2a-4526-aa64-b1b28e47d63d-xtables-lock\") pod \"kube-proxy-n55l2\" (UID: \"f1de7577-ae2a-4526-aa64-b1b28e47d63d\") " pod="kube-system/kube-proxy-n55l2" Feb 12 20:23:42.270895 kubelet[3105]: I0212 20:23:42.270752 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:42.306504 kubelet[3105]: W0212 20:23:42.306449 3105 reflector.go:424] 
object-"tigera-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-195" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-195' and this object Feb 12 20:23:42.306720 kubelet[3105]: E0212 20:23:42.306530 3105 reflector.go:140] object-"tigera-operator"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ip-172-31-16-195" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-195' and this object Feb 12 20:23:42.307011 kubelet[3105]: W0212 20:23:42.306984 3105 reflector.go:424] object-"tigera-operator"/"kubernetes-services-endpoint": failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-16-195" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-195' and this object Feb 12 20:23:42.307186 kubelet[3105]: E0212 20:23:42.307165 3105 reflector.go:140] object-"tigera-operator"/"kubernetes-services-endpoint": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kubernetes-services-endpoint" is forbidden: User "system:node:ip-172-31-16-195" cannot list resource "configmaps" in API group "" in the namespace "tigera-operator": no relationship found between node 'ip-172-31-16-195' and this object Feb 12 20:23:42.355262 kubelet[3105]: I0212 20:23:42.355222 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/1b2d202d-cab0-4981-8dab-efda4b3fbea3-var-lib-calico\") pod \"tigera-operator-cfc98749c-tfhmh\" (UID: \"1b2d202d-cab0-4981-8dab-efda4b3fbea3\") " 
pod="tigera-operator/tigera-operator-cfc98749c-tfhmh" Feb 12 20:23:42.355537 kubelet[3105]: I0212 20:23:42.355514 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xspq8\" (UniqueName: \"kubernetes.io/projected/1b2d202d-cab0-4981-8dab-efda4b3fbea3-kube-api-access-xspq8\") pod \"tigera-operator-cfc98749c-tfhmh\" (UID: \"1b2d202d-cab0-4981-8dab-efda4b3fbea3\") " pod="tigera-operator/tigera-operator-cfc98749c-tfhmh" Feb 12 20:23:42.389370 env[1824]: time="2024-02-12T20:23:42.388773139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n55l2,Uid:f1de7577-ae2a-4526-aa64-b1b28e47d63d,Namespace:kube-system,Attempt:0,}" Feb 12 20:23:42.461759 env[1824]: time="2024-02-12T20:23:42.458888789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:42.462066 env[1824]: time="2024-02-12T20:23:42.461716785Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:42.462066 env[1824]: time="2024-02-12T20:23:42.462020701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:42.465596 env[1824]: time="2024-02-12T20:23:42.465034873Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/74dd065dd630e134b1f738c9fbcb51b483702a9c0e2622b2151ef44589e21083 pid=3212 runtime=io.containerd.runc.v2 Feb 12 20:23:42.682890 env[1824]: time="2024-02-12T20:23:42.682828603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n55l2,Uid:f1de7577-ae2a-4526-aa64-b1b28e47d63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"74dd065dd630e134b1f738c9fbcb51b483702a9c0e2622b2151ef44589e21083\"" Feb 12 20:23:42.695477 env[1824]: time="2024-02-12T20:23:42.695384596Z" level=info msg="CreateContainer within sandbox \"74dd065dd630e134b1f738c9fbcb51b483702a9c0e2622b2151ef44589e21083\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 12 20:23:42.722651 env[1824]: time="2024-02-12T20:23:42.722571355Z" level=info msg="CreateContainer within sandbox \"74dd065dd630e134b1f738c9fbcb51b483702a9c0e2622b2151ef44589e21083\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a60885e356ea413eb036c38efedf21538655468594bb5d24b09b8e8aa647ec33\"" Feb 12 20:23:42.723759 env[1824]: time="2024-02-12T20:23:42.723695164Z" level=info msg="StartContainer for \"a60885e356ea413eb036c38efedf21538655468594bb5d24b09b8e8aa647ec33\"" Feb 12 20:23:42.870135 env[1824]: time="2024-02-12T20:23:42.870029085Z" level=info msg="StartContainer for \"a60885e356ea413eb036c38efedf21538655468594bb5d24b09b8e8aa647ec33\" returns successfully" Feb 12 20:23:42.979000 audit[3303]: NETFILTER_CFG table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:42.979000 audit[3303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3db6620 a2=0 a3=ffffb71fd6c0 items=0 ppid=3263 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.998765 kernel: audit: type=1325 audit(1707769422.979:231): table=mangle:59 family=2 entries=1 op=nft_register_chain pid=3303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:42.998949 kernel: audit: type=1300 audit(1707769422.979:231): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff3db6620 a2=0 a3=ffffb71fd6c0 items=0 ppid=3263 pid=3303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:23:43.004724 kernel: audit: type=1327 audit(1707769422.979:231): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:23:42.979000 audit[3304]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.012052 kernel: audit: type=1325 audit(1707769422.979:232): table=nat:60 family=2 entries=1 op=nft_register_chain pid=3304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:42.979000 audit[3304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4419b80 a2=0 a3=ffffa03c26c0 items=0 ppid=3263 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.024589 kernel: audit: type=1300 audit(1707769422.979:232): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4419b80 a2=0 a3=ffffa03c26c0 items=0 ppid=3263 pid=3304 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.979000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:23:43.030562 kernel: audit: type=1327 audit(1707769422.979:232): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:23:43.030677 kernel: audit: type=1325 audit(1707769422.984:233): table=filter:61 family=2 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:42.984000 audit[3305]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_chain pid=3305 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:42.984000 audit[3305]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda4a26a0 a2=0 a3=ffffa79746c0 items=0 ppid=3263 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.047838 kernel: audit: type=1300 audit(1707769422.984:233): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffda4a26a0 a2=0 a3=ffffa79746c0 items=0 ppid=3263 pid=3305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.047992 kernel: audit: type=1327 audit(1707769422.984:233): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:23:42.984000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:23:42.986000 audit[3306]: 
NETFILTER_CFG table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.059239 kernel: audit: type=1325 audit(1707769422.986:234): table=mangle:62 family=10 entries=1 op=nft_register_chain pid=3306 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:42.986000 audit[3306]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe4401130 a2=0 a3=ffff9d07c6c0 items=0 ppid=3263 pid=3306 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Feb 12 20:23:42.986000 audit[3307]: NETFILTER_CFG table=nat:63 family=10 entries=1 op=nft_register_chain pid=3307 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:42.986000 audit[3307]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0d2f7a0 a2=0 a3=ffffb1a9e6c0 items=0 ppid=3263 pid=3307 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Feb 12 20:23:42.986000 audit[3308]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_chain pid=3308 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:42.986000 audit[3308]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd377bef0 a2=0 a3=ffff8fa166c0 items=0 ppid=3263 pid=3308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:42.986000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Feb 12 20:23:43.081000 audit[3309]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3309 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.081000 audit[3309]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff16057b0 a2=0 a3=ffffa522f6c0 items=0 ppid=3263 pid=3309 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.081000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 20:23:43.109000 audit[3311]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3311 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.109000 audit[3311]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd7924110 a2=0 a3=ffff96da16c0 items=0 ppid=3263 pid=3311 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.109000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Feb 12 20:23:43.117000 audit[3314]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3314 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.117000 audit[3314]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 
a1=fffffab4d850 a2=0 a3=ffff99d866c0 items=0 ppid=3263 pid=3314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.117000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Feb 12 20:23:43.120000 audit[3315]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.120000 audit[3315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc64c2d40 a2=0 a3=ffff91d9d6c0 items=0 ppid=3263 pid=3315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:23:43.128000 audit[3317]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.128000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc2638e0 a2=0 a3=ffff8401d6c0 items=0 ppid=3263 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.128000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:23:43.130000 audit[3318]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3318 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.130000 audit[3318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef1b3600 a2=0 a3=ffffa5cd96c0 items=0 ppid=3263 pid=3318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.130000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:23:43.136000 audit[3320]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3320 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.136000 audit[3320]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffebf51a00 a2=0 a3=ffffb55d06c0 items=0 ppid=3263 pid=3320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.136000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:23:43.144000 audit[3323]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.144000 audit[3323]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffef2c1f90 a2=0 a3=ffff880336c0 items=0 ppid=3263 pid=3323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.144000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Feb 12 20:23:43.147000 audit[3324]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_chain pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.147000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe1a4a730 a2=0 a3=ffffaab6e6c0 items=0 ppid=3263 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:23:43.154000 audit[3326]: NETFILTER_CFG table=filter:74 family=2 entries=1 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.154000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7c78830 a2=0 a3=ffff991d06c0 items=0 ppid=3263 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.154000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:23:43.157000 audit[3327]: NETFILTER_CFG table=filter:75 family=2 entries=1 op=nft_register_chain pid=3327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.157000 audit[3327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdb1202c0 a2=0 a3=ffff7f7ec6c0 items=0 ppid=3263 pid=3327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.157000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:23:43.164000 audit[3329]: NETFILTER_CFG table=filter:76 family=2 entries=1 op=nft_register_rule pid=3329 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.164000 audit[3329]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc806caa0 a2=0 a3=ffffb27516c0 items=0 ppid=3263 pid=3329 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:23:43.175000 audit[3332]: NETFILTER_CFG table=filter:77 family=2 entries=1 op=nft_register_rule pid=3332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.175000 audit[3332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 
a1=fffff5a106c0 a2=0 a3=ffffa767a6c0 items=0 ppid=3263 pid=3332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:23:43.183000 audit[3335]: NETFILTER_CFG table=filter:78 family=2 entries=1 op=nft_register_rule pid=3335 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.183000 audit[3335]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1e3b850 a2=0 a3=ffff954056c0 items=0 ppid=3263 pid=3335 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.183000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:23:43.185000 audit[3336]: NETFILTER_CFG table=nat:79 family=2 entries=1 op=nft_register_chain pid=3336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.185000 audit[3336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9d86910 a2=0 a3=ffff972376c0 items=0 ppid=3263 pid=3336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.185000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:23:43.190000 audit[3338]: NETFILTER_CFG table=nat:80 family=2 entries=1 op=nft_register_rule pid=3338 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.190000 audit[3338]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe0ba2e30 a2=0 a3=ffffb396e6c0 items=0 ppid=3263 pid=3338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.190000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:23:43.197000 audit[3341]: NETFILTER_CFG table=nat:81 family=2 entries=1 op=nft_register_rule pid=3341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Feb 12 20:23:43.197000 audit[3341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdd30cd00 a2=0 a3=ffffa09c46c0 items=0 ppid=3263 pid=3341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:23:43.222000 audit[3345]: NETFILTER_CFG table=filter:82 family=2 entries=6 op=nft_register_rule pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:43.222000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffd1d58dc0 a2=0 a3=ffffaa5616c0 items=0 ppid=3263 
pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.222000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:43.236000 audit[3345]: NETFILTER_CFG table=nat:83 family=2 entries=17 op=nft_register_chain pid=3345 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:43.236000 audit[3345]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd1d58dc0 a2=0 a3=ffffaa5616c0 items=0 ppid=3263 pid=3345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.236000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:43.241000 audit[3350]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3350 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.241000 audit[3350]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe3fe55b0 a2=0 a3=ffff9bf146c0 items=0 ppid=3263 pid=3350 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.241000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Feb 12 20:23:43.246000 audit[3352]: NETFILTER_CFG table=filter:85 family=10 entries=2 op=nft_register_chain pid=3352 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.246000 audit[3352]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=836 a0=3 a1=ffffcf785ea0 a2=0 a3=ffffbaaf36c0 items=0 ppid=3263 pid=3352 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.246000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Feb 12 20:23:43.254000 audit[3355]: NETFILTER_CFG table=filter:86 family=10 entries=2 op=nft_register_chain pid=3355 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.254000 audit[3355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffda3e53b0 a2=0 a3=ffff82b876c0 items=0 ppid=3263 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.254000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Feb 12 20:23:43.257000 audit[3356]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_chain pid=3356 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.257000 audit[3356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5c96e30 a2=0 a3=ffffa40946c0 items=0 ppid=3263 pid=3356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.257000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Feb 12 20:23:43.262000 audit[3358]: NETFILTER_CFG table=filter:88 family=10 entries=1 op=nft_register_rule pid=3358 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.262000 audit[3358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc29b3510 a2=0 a3=ffff882136c0 items=0 ppid=3263 pid=3358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.262000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Feb 12 20:23:43.265000 audit[3359]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3359 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.265000 audit[3359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5d496d0 a2=0 a3=ffffb5e316c0 items=0 ppid=3263 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.265000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Feb 12 20:23:43.271000 audit[3361]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.271000 audit[3361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe5145da0 a2=0 a3=ffff909f56c0 items=0 ppid=3263 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.271000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Feb 12 20:23:43.279000 audit[3364]: NETFILTER_CFG table=filter:91 family=10 entries=2 op=nft_register_chain pid=3364 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.279000 audit[3364]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc3d6f200 a2=0 a3=ffffa02e76c0 items=0 ppid=3263 pid=3364 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.279000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Feb 12 20:23:43.282000 audit[3365]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_chain pid=3365 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.282000 audit[3365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdb4bd5d0 a2=0 a3=ffffaac416c0 items=0 ppid=3263 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.282000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Feb 12 20:23:43.286000 audit[3367]: NETFILTER_CFG 
table=filter:93 family=10 entries=1 op=nft_register_rule pid=3367 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.286000 audit[3367]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff9968ac0 a2=0 a3=ffff96c186c0 items=0 ppid=3263 pid=3367 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.286000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Feb 12 20:23:43.289000 audit[3368]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_chain pid=3368 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.289000 audit[3368]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcdb7a990 a2=0 a3=ffff939576c0 items=0 ppid=3263 pid=3368 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.289000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Feb 12 20:23:43.294000 audit[3370]: NETFILTER_CFG table=filter:95 family=10 entries=1 op=nft_register_rule pid=3370 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.294000 audit[3370]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd4f22550 a2=0 a3=ffff96ecc6c0 items=0 ppid=3263 pid=3370 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.294000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Feb 12 20:23:43.302000 audit[3373]: NETFILTER_CFG table=filter:96 family=10 entries=1 op=nft_register_rule pid=3373 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.302000 audit[3373]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc4ebefe0 a2=0 a3=ffff853fe6c0 items=0 ppid=3263 pid=3373 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.302000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Feb 12 20:23:43.311000 audit[3376]: NETFILTER_CFG table=filter:97 family=10 entries=1 op=nft_register_rule pid=3376 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.311000 audit[3376]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd9e64fe0 a2=0 a3=ffffa7a8b6c0 items=0 ppid=3263 pid=3376 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.311000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Feb 12 20:23:43.313000 audit[3377]: NETFILTER_CFG table=nat:98 family=10 
entries=1 op=nft_register_chain pid=3377 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.313000 audit[3377]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffee33cec0 a2=0 a3=ffffa3b936c0 items=0 ppid=3263 pid=3377 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.313000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Feb 12 20:23:43.318000 audit[3379]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3379 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.318000 audit[3379]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff0697550 a2=0 a3=ffffbf6386c0 items=0 ppid=3263 pid=3379 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:23:43.326000 audit[3382]: NETFILTER_CFG table=nat:100 family=10 entries=2 op=nft_register_chain pid=3382 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Feb 12 20:23:43.326000 audit[3382]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe9710ea0 a2=0 a3=ffff97b646c0 items=0 ppid=3263 pid=3382 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.326000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Feb 12 20:23:43.344125 systemd[1]: run-containerd-runc-k8s.io-74dd065dd630e134b1f738c9fbcb51b483702a9c0e2622b2151ef44589e21083-runc.gasRw5.mount: Deactivated successfully. Feb 12 20:23:43.346000 audit[3386]: NETFILTER_CFG table=filter:101 family=10 entries=3 op=nft_register_rule pid=3386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:23:43.346000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffc9cf3d90 a2=0 a3=ffffa386c6c0 items=0 ppid=3263 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.346000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:43.347000 audit[3386]: NETFILTER_CFG table=nat:102 family=10 entries=10 op=nft_register_chain pid=3386 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Feb 12 20:23:43.347000 audit[3386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffc9cf3d90 a2=0 a3=ffffa386c6c0 items=0 ppid=3263 pid=3386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:43.347000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:43.542818 kubelet[3105]: E0212 20:23:43.541818 3105 projected.go:292] Couldn't get configMap tigera-operator/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition 
Feb 12 20:23:43.542818 kubelet[3105]: E0212 20:23:43.542297 3105 projected.go:198] Error preparing data for projected volume kube-api-access-xspq8 for pod tigera-operator/tigera-operator-cfc98749c-tfhmh: failed to sync configmap cache: timed out waiting for the condition Feb 12 20:23:43.542818 kubelet[3105]: E0212 20:23:43.542428 3105 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/1b2d202d-cab0-4981-8dab-efda4b3fbea3-kube-api-access-xspq8 podName:1b2d202d-cab0-4981-8dab-efda4b3fbea3 nodeName:}" failed. No retries permitted until 2024-02-12 20:23:44.042393812 +0000 UTC m=+16.751074373 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-xspq8" (UniqueName: "kubernetes.io/projected/1b2d202d-cab0-4981-8dab-efda4b3fbea3-kube-api-access-xspq8") pod "tigera-operator-cfc98749c-tfhmh" (UID: "1b2d202d-cab0-4981-8dab-efda4b3fbea3") : failed to sync configmap cache: timed out waiting for the condition Feb 12 20:23:44.083492 env[1824]: time="2024-02-12T20:23:44.083428604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-tfhmh,Uid:1b2d202d-cab0-4981-8dab-efda4b3fbea3,Namespace:tigera-operator,Attempt:0,}" Feb 12 20:23:44.113695 env[1824]: time="2024-02-12T20:23:44.113533158Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:44.113947 env[1824]: time="2024-02-12T20:23:44.113654063Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:44.113947 env[1824]: time="2024-02-12T20:23:44.113681421Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:44.114219 env[1824]: time="2024-02-12T20:23:44.113915899Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66 pid=3396 runtime=io.containerd.runc.v2 Feb 12 20:23:44.226746 env[1824]: time="2024-02-12T20:23:44.226669184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-cfc98749c-tfhmh,Uid:1b2d202d-cab0-4981-8dab-efda4b3fbea3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66\"" Feb 12 20:23:44.232229 env[1824]: time="2024-02-12T20:23:44.229347194Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\"" Feb 12 20:23:45.558863 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount864016777.mount: Deactivated successfully. Feb 12 20:23:46.689683 env[1824]: time="2024-02-12T20:23:46.689595330Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:46.693181 env[1824]: time="2024-02-12T20:23:46.693096549Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:46.696830 env[1824]: time="2024-02-12T20:23:46.696765015Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.32.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:46.700163 env[1824]: time="2024-02-12T20:23:46.700096492Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:715ac9a30f8a9579e44258af20de354715429e11836b493918e9e1a696e9b028,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 12 20:23:46.701961 env[1824]: time="2024-02-12T20:23:46.701879842Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.32.3\" returns image reference \"sha256:c7a10ec867a90652f951a6ba5a12efb94165e0a1c9b72167810d1065e57d768f\"" Feb 12 20:23:46.710799 env[1824]: time="2024-02-12T20:23:46.710737494Z" level=info msg="CreateContainer within sandbox \"751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Feb 12 20:23:46.736179 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2935884635.mount: Deactivated successfully. Feb 12 20:23:46.749675 env[1824]: time="2024-02-12T20:23:46.749608221Z" level=info msg="CreateContainer within sandbox \"751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2\"" Feb 12 20:23:46.750916 env[1824]: time="2024-02-12T20:23:46.750847858Z" level=info msg="StartContainer for \"a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2\"" Feb 12 20:23:46.871031 env[1824]: time="2024-02-12T20:23:46.870960840Z" level=info msg="StartContainer for \"a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2\" returns successfully" Feb 12 20:23:47.114831 kubelet[3105]: I0212 20:23:47.114154 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-n55l2" podStartSLOduration=5.114063996 pod.CreationTimestamp="2024-02-12 20:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:43.098233834 +0000 UTC m=+15.806914419" watchObservedRunningTime="2024-02-12 20:23:47.114063996 +0000 UTC m=+19.822744569" Feb 12 20:23:47.727376 systemd[1]: run-containerd-runc-k8s.io-a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2-runc.8aIDzQ.mount: 
Deactivated successfully. Feb 12 20:23:48.025506 kubelet[3105]: I0212 20:23:48.025209 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-cfc98749c-tfhmh" podStartSLOduration=-9.223372030829622e+09 pod.CreationTimestamp="2024-02-12 20:23:42 +0000 UTC" firstStartedPulling="2024-02-12 20:23:44.228611914 +0000 UTC m=+16.937292487" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:47.115583531 +0000 UTC m=+19.824264176" watchObservedRunningTime="2024-02-12 20:23:48.02515261 +0000 UTC m=+20.733833219" Feb 12 20:23:51.308000 audit[3492]: NETFILTER_CFG table=filter:103 family=2 entries=13 op=nft_register_rule pid=3492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.311768 kernel: kauditd_printk_skb: 122 callbacks suppressed Feb 12 20:23:51.311918 kernel: audit: type=1325 audit(1707769431.308:275): table=filter:103 family=2 entries=13 op=nft_register_rule pid=3492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.308000 audit[3492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd68a46b0 a2=0 a3=ffffbe5296c0 items=0 ppid=3263 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.329724 kernel: audit: type=1300 audit(1707769431.308:275): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffd68a46b0 a2=0 a3=ffffbe5296c0 items=0 ppid=3263 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.308000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.335432 kernel: audit: 
type=1327 audit(1707769431.308:275): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.310000 audit[3492]: NETFILTER_CFG table=nat:104 family=2 entries=20 op=nft_register_rule pid=3492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.345299 kernel: audit: type=1325 audit(1707769431.310:276): table=nat:104 family=2 entries=20 op=nft_register_rule pid=3492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.310000 audit[3492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd68a46b0 a2=0 a3=ffffbe5296c0 items=0 ppid=3263 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.358840 kernel: audit: type=1300 audit(1707769431.310:276): arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffd68a46b0 a2=0 a3=ffffbe5296c0 items=0 ppid=3263 pid=3492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.310000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.368196 kernel: audit: type=1327 audit(1707769431.310:276): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.410247 kubelet[3105]: I0212 20:23:51.410193 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:51.522310 kubelet[3105]: I0212 20:23:51.522241 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qfcgd\" (UniqueName: 
\"kubernetes.io/projected/1328708f-765f-4deb-b16f-b1c21062f50b-kube-api-access-qfcgd\") pod \"calico-typha-6cddcbdd45-8pckb\" (UID: \"1328708f-765f-4deb-b16f-b1c21062f50b\") " pod="calico-system/calico-typha-6cddcbdd45-8pckb" Feb 12 20:23:51.522478 kubelet[3105]: I0212 20:23:51.522333 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1328708f-765f-4deb-b16f-b1c21062f50b-tigera-ca-bundle\") pod \"calico-typha-6cddcbdd45-8pckb\" (UID: \"1328708f-765f-4deb-b16f-b1c21062f50b\") " pod="calico-system/calico-typha-6cddcbdd45-8pckb" Feb 12 20:23:51.522478 kubelet[3105]: I0212 20:23:51.522385 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/1328708f-765f-4deb-b16f-b1c21062f50b-typha-certs\") pod \"calico-typha-6cddcbdd45-8pckb\" (UID: \"1328708f-765f-4deb-b16f-b1c21062f50b\") " pod="calico-system/calico-typha-6cddcbdd45-8pckb" Feb 12 20:23:51.595097 kubelet[3105]: I0212 20:23:51.594941 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:51.712000 audit[3520]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=3520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.719618 kernel: audit: type=1325 audit(1707769431.712:277): table=filter:105 family=2 entries=14 op=nft_register_rule pid=3520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.719724 kernel: audit: type=1300 audit(1707769431.712:277): arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffe97a3cb0 a2=0 a3=ffff86b486c0 items=0 ppid=3263 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.712000 audit[3520]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=4732 a0=3 a1=ffffe97a3cb0 a2=0 a3=ffff86b486c0 items=0 ppid=3263 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.723415 kubelet[3105]: I0212 20:23:51.723373 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-lib-modules\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.723724 kubelet[3105]: I0212 20:23:51.723700 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-var-run-calico\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.723900 kubelet[3105]: I0212 20:23:51.723863 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-var-lib-calico\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724070 kubelet[3105]: I0212 20:23:51.724037 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-xtables-lock\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724217 kubelet[3105]: I0212 20:23:51.724197 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"policysync\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-policysync\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724442 kubelet[3105]: I0212 20:23:51.724409 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e4161c71-2662-43e6-b0d8-4931a4e717e0-tigera-ca-bundle\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724628 kubelet[3105]: I0212 20:23:51.724606 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e4161c71-2662-43e6-b0d8-4931a4e717e0-node-certs\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724798 kubelet[3105]: I0212 20:23:51.724764 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-cni-bin-dir\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.724962 kubelet[3105]: I0212 20:23:51.724929 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rlxrv\" (UniqueName: \"kubernetes.io/projected/e4161c71-2662-43e6-b0d8-4931a4e717e0-kube-api-access-rlxrv\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.725114 kubelet[3105]: I0212 20:23:51.725093 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-cni-net-dir\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.725329 kubelet[3105]: I0212 20:23:51.725305 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-cni-log-dir\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.725495 kubelet[3105]: I0212 20:23:51.725461 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e4161c71-2662-43e6-b0d8-4931a4e717e0-flexvol-driver-host\") pod \"calico-node-57b9r\" (UID: \"e4161c71-2662-43e6-b0d8-4931a4e717e0\") " pod="calico-system/calico-node-57b9r" Feb 12 20:23:51.712000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.737073 kernel: audit: type=1327 audit(1707769431.712:277): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.719000 audit[3520]: NETFILTER_CFG table=nat:106 family=2 entries=20 op=nft_register_rule pid=3520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.743312 kernel: audit: type=1325 audit(1707769431.719:278): table=nat:106 family=2 entries=20 op=nft_register_rule pid=3520 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:23:51.719000 audit[3520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffe97a3cb0 a2=0 a3=ffff86b486c0 items=0 ppid=3263 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:23:51.719000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:23:51.769633 env[1824]: time="2024-02-12T20:23:51.768872051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cddcbdd45-8pckb,Uid:1328708f-765f-4deb-b16f-b1c21062f50b,Namespace:calico-system,Attempt:0,}" Feb 12 20:23:51.825629 kubelet[3105]: I0212 20:23:51.825115 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:23:51.827928 kubelet[3105]: E0212 20:23:51.827831 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d Feb 12 20:23:51.829442 kubelet[3105]: E0212 20:23:51.829404 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.829758 kubelet[3105]: W0212 20:23:51.829717 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.829959 kubelet[3105]: E0212 20:23:51.829932 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.830649 kubelet[3105]: E0212 20:23:51.830617 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.830855 kubelet[3105]: W0212 20:23:51.830823 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.831520 kubelet[3105]: E0212 20:23:51.831487 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.832757 kubelet[3105]: W0212 20:23:51.832666 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.834670 env[1824]: time="2024-02-12T20:23:51.834502283Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:23:51.835009 kubelet[3105]: E0212 20:23:51.834977 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.835244 kubelet[3105]: W0212 20:23:51.835205 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.835436 kubelet[3105]: E0212 20:23:51.835411 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.835734 kubelet[3105]: E0212 20:23:51.835684 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.836480 env[1824]: time="2024-02-12T20:23:51.836406108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:23:51.836831 env[1824]: time="2024-02-12T20:23:51.836732444Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:23:51.837078 kubelet[3105]: E0212 20:23:51.837021 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.837413 kubelet[3105]: E0212 20:23:51.837384 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.838911 env[1824]: time="2024-02-12T20:23:51.838783214Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ef69d7e0142d17323e8b328b4b122a268d8b2b96786b326afbb0d9e1bb9f3d17 pid=3529 runtime=io.containerd.runc.v2 Feb 12 20:23:51.839395 kubelet[3105]: W0212 20:23:51.839320 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.840136 kubelet[3105]: E0212 20:23:51.840043 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.844189 kubelet[3105]: E0212 20:23:51.844150 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.844426 kubelet[3105]: W0212 20:23:51.844395 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.844678 kubelet[3105]: E0212 20:23:51.844650 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.847291 kubelet[3105]: E0212 20:23:51.845447 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.847534 kubelet[3105]: W0212 20:23:51.847496 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.847715 kubelet[3105]: E0212 20:23:51.847687 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.859933 kubelet[3105]: E0212 20:23:51.859892 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.860129 kubelet[3105]: W0212 20:23:51.860101 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.860310 kubelet[3105]: E0212 20:23:51.860288 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.861014 kubelet[3105]: E0212 20:23:51.860986 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.861182 kubelet[3105]: W0212 20:23:51.861154 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.861332 kubelet[3105]: E0212 20:23:51.861311 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.885625 kubelet[3105]: E0212 20:23:51.882132 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.885625 kubelet[3105]: W0212 20:23:51.882174 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.885625 kubelet[3105]: E0212 20:23:51.882210 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.898794 kubelet[3105]: E0212 20:23:51.898311 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.898794 kubelet[3105]: W0212 20:23:51.898342 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.898794 kubelet[3105]: E0212 20:23:51.898378 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.900397 kubelet[3105]: E0212 20:23:51.900343 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.900644 kubelet[3105]: W0212 20:23:51.900608 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.900849 kubelet[3105]: E0212 20:23:51.900822 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.908632 kubelet[3105]: E0212 20:23:51.908594 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.908874 kubelet[3105]: W0212 20:23:51.908842 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.909063 kubelet[3105]: E0212 20:23:51.909035 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.924856 kubelet[3105]: E0212 20:23:51.924817 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.925076 kubelet[3105]: W0212 20:23:51.925046 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.925489 kubelet[3105]: E0212 20:23:51.925464 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.931268 kubelet[3105]: E0212 20:23:51.931229 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.931534 kubelet[3105]: W0212 20:23:51.931497 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.931734 kubelet[3105]: E0212 20:23:51.931707 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.932470 kubelet[3105]: E0212 20:23:51.932429 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.934756 kubelet[3105]: W0212 20:23:51.934688 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.935017 kubelet[3105]: E0212 20:23:51.934992 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.945226 kubelet[3105]: E0212 20:23:51.945186 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.945474 kubelet[3105]: W0212 20:23:51.945437 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.945723 kubelet[3105]: E0212 20:23:51.945695 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.951465 kubelet[3105]: E0212 20:23:51.951430 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.955840 kubelet[3105]: W0212 20:23:51.955779 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.956053 kubelet[3105]: E0212 20:23:51.956029 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.956717 kubelet[3105]: E0212 20:23:51.956679 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.956925 kubelet[3105]: W0212 20:23:51.956893 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.957095 kubelet[3105]: E0212 20:23:51.957072 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.957747 kubelet[3105]: E0212 20:23:51.957712 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.957961 kubelet[3105]: W0212 20:23:51.957927 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.958225 kubelet[3105]: E0212 20:23:51.958187 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.962767 kubelet[3105]: E0212 20:23:51.962726 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.963013 kubelet[3105]: W0212 20:23:51.962974 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.963253 kubelet[3105]: E0212 20:23:51.963208 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.963510 kubelet[3105]: I0212 20:23:51.963470 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/8c257656-1c37-42c0-80d9-a7f2f0b7582d-kubelet-dir\") pod \"csi-node-driver-tpmmd\" (UID: \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\") " pod="calico-system/csi-node-driver-tpmmd" Feb 12 20:23:51.964228 kubelet[3105]: E0212 20:23:51.964191 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.964468 kubelet[3105]: W0212 20:23:51.964433 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.964672 kubelet[3105]: E0212 20:23:51.964647 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.964888 kubelet[3105]: I0212 20:23:51.964846 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/8c257656-1c37-42c0-80d9-a7f2f0b7582d-varrun\") pod \"csi-node-driver-tpmmd\" (UID: \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\") " pod="calico-system/csi-node-driver-tpmmd" Feb 12 20:23:51.966917 kubelet[3105]: E0212 20:23:51.966876 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.967157 kubelet[3105]: W0212 20:23:51.967118 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.967782 kubelet[3105]: E0212 20:23:51.967725 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.967994 kubelet[3105]: I0212 20:23:51.967810 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/8c257656-1c37-42c0-80d9-a7f2f0b7582d-socket-dir\") pod \"csi-node-driver-tpmmd\" (UID: \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\") " pod="calico-system/csi-node-driver-tpmmd" Feb 12 20:23:51.970769 kubelet[3105]: E0212 20:23:51.970728 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.971035 kubelet[3105]: W0212 20:23:51.970998 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.972594 kubelet[3105]: E0212 20:23:51.972403 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.972919 kubelet[3105]: E0212 20:23:51.972891 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.973072 kubelet[3105]: W0212 20:23:51.973043 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.973300 kubelet[3105]: E0212 20:23:51.973264 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.976202 kubelet[3105]: E0212 20:23:51.976163 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.976434 kubelet[3105]: W0212 20:23:51.976397 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.977123 kubelet[3105]: E0212 20:23:51.977088 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.977355 kubelet[3105]: W0212 20:23:51.977316 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.977605 kubelet[3105]: E0212 20:23:51.977571 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.978352 kubelet[3105]: E0212 20:23:51.978290 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.978352 kubelet[3105]: E0212 20:23:51.978300 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.978621 kubelet[3105]: W0212 20:23:51.978404 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.978621 kubelet[3105]: E0212 20:23:51.978437 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.978959 kubelet[3105]: E0212 20:23:51.978899 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.978959 kubelet[3105]: W0212 20:23:51.978937 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.979177 kubelet[3105]: E0212 20:23:51.978970 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.979355 kubelet[3105]: E0212 20:23:51.979308 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.979355 kubelet[3105]: W0212 20:23:51.979338 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.979486 kubelet[3105]: E0212 20:23:51.979368 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.980903 kubelet[3105]: E0212 20:23:51.980844 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.980903 kubelet[3105]: W0212 20:23:51.980888 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.981218 kubelet[3105]: E0212 20:23:51.980939 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.981323 kubelet[3105]: E0212 20:23:51.981282 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.981323 kubelet[3105]: W0212 20:23:51.981315 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.981465 kubelet[3105]: E0212 20:23:51.981346 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.983787 kubelet[3105]: E0212 20:23:51.983729 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.983787 kubelet[3105]: W0212 20:23:51.983772 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.984052 kubelet[3105]: E0212 20:23:51.983827 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:51.989629 kubelet[3105]: E0212 20:23:51.988815 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.989629 kubelet[3105]: W0212 20:23:51.988857 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.989629 kubelet[3105]: E0212 20:23:51.988898 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:51.992859 kubelet[3105]: E0212 20:23:51.992805 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:51.992859 kubelet[3105]: W0212 20:23:51.992847 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:51.993120 kubelet[3105]: E0212 20:23:51.992887 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.090951 kubelet[3105]: E0212 20:23:52.090887 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.090951 kubelet[3105]: W0212 20:23:52.090932 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.091201 kubelet[3105]: E0212 20:23:52.090971 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.095766 kubelet[3105]: E0212 20:23:52.095699 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.095766 kubelet[3105]: W0212 20:23:52.095753 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.096067 kubelet[3105]: E0212 20:23:52.095812 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.100982 kubelet[3105]: E0212 20:23:52.096836 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.100982 kubelet[3105]: W0212 20:23:52.096872 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.100982 kubelet[3105]: E0212 20:23:52.096910 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.100982 kubelet[3105]: I0212 20:23:52.097075 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/8c257656-1c37-42c0-80d9-a7f2f0b7582d-registration-dir\") pod \"csi-node-driver-tpmmd\" (UID: \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\") " pod="calico-system/csi-node-driver-tpmmd" Feb 12 20:23:52.101776 kubelet[3105]: E0212 20:23:52.101444 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.101776 kubelet[3105]: W0212 20:23:52.101477 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.101776 kubelet[3105]: E0212 20:23:52.101524 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.102594 kubelet[3105]: E0212 20:23:52.102204 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.102594 kubelet[3105]: W0212 20:23:52.102235 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.102594 kubelet[3105]: E0212 20:23:52.102353 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.103285 kubelet[3105]: E0212 20:23:52.103015 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.103285 kubelet[3105]: W0212 20:23:52.103044 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.103285 kubelet[3105]: E0212 20:23:52.103086 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.103963 kubelet[3105]: E0212 20:23:52.103726 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.103963 kubelet[3105]: W0212 20:23:52.103754 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.103963 kubelet[3105]: E0212 20:23:52.103824 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.104660 kubelet[3105]: E0212 20:23:52.104375 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.104660 kubelet[3105]: W0212 20:23:52.104404 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.104660 kubelet[3105]: E0212 20:23:52.104457 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.105371 kubelet[3105]: E0212 20:23:52.105338 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.105642 kubelet[3105]: W0212 20:23:52.105607 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.105898 kubelet[3105]: E0212 20:23:52.105868 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.106922 kubelet[3105]: E0212 20:23:52.106884 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.107182 kubelet[3105]: W0212 20:23:52.107130 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.107412 kubelet[3105]: E0212 20:23:52.107385 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.108917 kubelet[3105]: I0212 20:23:52.108879 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5hcc\" (UniqueName: \"kubernetes.io/projected/8c257656-1c37-42c0-80d9-a7f2f0b7582d-kube-api-access-h5hcc\") pod \"csi-node-driver-tpmmd\" (UID: \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\") " pod="calico-system/csi-node-driver-tpmmd" Feb 12 20:23:52.110893 kubelet[3105]: E0212 20:23:52.110857 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.111196 kubelet[3105]: W0212 20:23:52.111136 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.111434 kubelet[3105]: E0212 20:23:52.111392 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.115436 kubelet[3105]: E0212 20:23:52.115374 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.115812 kubelet[3105]: W0212 20:23:52.115761 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.119499 kubelet[3105]: E0212 20:23:52.119440 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.122468 kubelet[3105]: E0212 20:23:52.122434 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.122815 kubelet[3105]: W0212 20:23:52.122780 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.122997 kubelet[3105]: E0212 20:23:52.122975 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.124228 kubelet[3105]: E0212 20:23:52.124193 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.124459 kubelet[3105]: W0212 20:23:52.124429 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.124658 kubelet[3105]: E0212 20:23:52.124633 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.129735 env[1824]: time="2024-02-12T20:23:52.129677083Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cddcbdd45-8pckb,Uid:1328708f-765f-4deb-b16f-b1c21062f50b,Namespace:calico-system,Attempt:0,} returns sandbox id \"ef69d7e0142d17323e8b328b4b122a268d8b2b96786b326afbb0d9e1bb9f3d17\"" Feb 12 20:23:52.135534 kubelet[3105]: E0212 20:23:52.127125 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.135859 kubelet[3105]: W0212 20:23:52.135823 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.136107 kubelet[3105]: E0212 20:23:52.136080 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.138744 kubelet[3105]: E0212 20:23:52.138703 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.138965 kubelet[3105]: W0212 20:23:52.138929 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.139119 kubelet[3105]: E0212 20:23:52.139094 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.139811 kubelet[3105]: E0212 20:23:52.139773 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.140016 kubelet[3105]: W0212 20:23:52.139982 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.140163 kubelet[3105]: E0212 20:23:52.140139 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.140749 kubelet[3105]: E0212 20:23:52.140716 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.140993 kubelet[3105]: W0212 20:23:52.140958 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.141143 kubelet[3105]: E0212 20:23:52.141117 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.141972 kubelet[3105]: E0212 20:23:52.141836 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.142233 kubelet[3105]: W0212 20:23:52.142198 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.142579 kubelet[3105]: E0212 20:23:52.142508 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.145405 kubelet[3105]: E0212 20:23:52.144037 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.145800 kubelet[3105]: W0212 20:23:52.145739 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.146034 kubelet[3105]: E0212 20:23:52.145989 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.146873 kubelet[3105]: E0212 20:23:52.146825 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.147169 kubelet[3105]: W0212 20:23:52.147131 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.147625 kubelet[3105]: E0212 20:23:52.147519 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.157367 env[1824]: time="2024-02-12T20:23:52.157037554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\"" Feb 12 20:23:52.211019 env[1824]: time="2024-02-12T20:23:52.210497538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57b9r,Uid:e4161c71-2662-43e6-b0d8-4931a4e717e0,Namespace:calico-system,Attempt:0,}" Feb 12 20:23:52.237782 kubelet[3105]: E0212 20:23:52.237275 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.237782 kubelet[3105]: W0212 20:23:52.237303 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.237782 kubelet[3105]: E0212 20:23:52.237337 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.239041 kubelet[3105]: E0212 20:23:52.238308 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.239041 kubelet[3105]: W0212 20:23:52.238352 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.239832 kubelet[3105]: E0212 20:23:52.239368 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.241690 kubelet[3105]: E0212 20:23:52.240041 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.241690 kubelet[3105]: W0212 20:23:52.240071 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.241690 kubelet[3105]: E0212 20:23:52.241350 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.242454 kubelet[3105]: E0212 20:23:52.242138 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.242454 kubelet[3105]: W0212 20:23:52.242172 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.242454 kubelet[3105]: E0212 20:23:52.242218 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.243115 kubelet[3105]: E0212 20:23:52.243082 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.243452 kubelet[3105]: W0212 20:23:52.243285 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.243786 kubelet[3105]: E0212 20:23:52.243739 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:52.244245 kubelet[3105]: E0212 20:23:52.244219 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.244395 kubelet[3105]: W0212 20:23:52.244366 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.244528 kubelet[3105]: E0212 20:23:52.244507 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:52.245041 kubelet[3105]: E0212 20:23:52.245018 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:52.245213 kubelet[3105]: W0212 20:23:52.245186 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:52.245346 kubelet[3105]: E0212 20:23:52.245325 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Feb 12 20:23:52.245902 kubelet[3105]: E0212 20:23:52.245875 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:52.246064 kubelet[3105]: W0212 20:23:52.246035 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:52.246200 kubelet[3105]: E0212 20:23:52.246179 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:52.246873 kubelet[3105]: E0212 20:23:52.246841 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:52.247090 kubelet[3105]: W0212 20:23:52.247060 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:52.247223 kubelet[3105]: E0212 20:23:52.247203 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:52.248175 kubelet[3105]: E0212 20:23:52.248136 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:52.248651 kubelet[3105]: W0212 20:23:52.248615 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:52.248848 kubelet[3105]: E0212 20:23:52.248824 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:52.264251 env[1824]: time="2024-02-12T20:23:52.262716519Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 12 20:23:52.264251 env[1824]: time="2024-02-12T20:23:52.262792295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 12 20:23:52.264251 env[1824]: time="2024-02-12T20:23:52.262818502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 12 20:23:52.264251 env[1824]: time="2024-02-12T20:23:52.263172785Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9 pid=3633 runtime=io.containerd.runc.v2
Feb 12 20:23:52.287770 kubelet[3105]: E0212 20:23:52.287722 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:52.294706 kubelet[3105]: W0212 20:23:52.293371 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:52.294888 kubelet[3105]: E0212 20:23:52.294717 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:52.421942 env[1824]: time="2024-02-12T20:23:52.420047623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-57b9r,Uid:e4161c71-2662-43e6-b0d8-4931a4e717e0,Namespace:calico-system,Attempt:0,} returns sandbox id \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\""
Feb 12 20:23:52.941000 audit[3704]: NETFILTER_CFG table=filter:107 family=2 entries=14 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 20:23:52.941000 audit[3704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4732 a0=3 a1=ffffc53bad20 a2=0 a3=ffff8a1a76c0 items=0 ppid=3263 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:23:52.941000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 20:23:52.946000 audit[3704]: NETFILTER_CFG table=nat:108 family=2 entries=20 op=nft_register_rule pid=3704 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Feb 12 20:23:52.946000 audit[3704]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc53bad20 a2=0 a3=ffff8a1a76c0 items=0 ppid=3263 pid=3704 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:23:52.946000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Feb 12 20:23:53.375806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount757879203.mount: Deactivated successfully.
Feb 12 20:23:54.005050 kubelet[3105]: E0212 20:23:54.003586 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d
Feb 12 20:23:54.757236 env[1824]: time="2024-02-12T20:23:54.757181066Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:23:54.760459 env[1824]: time="2024-02-12T20:23:54.760405582Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:23:54.763366 env[1824]: time="2024-02-12T20:23:54.763294017Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:23:54.768436 env[1824]: time="2024-02-12T20:23:54.768365356Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:5f2d3b8c354a4eb6de46e786889913916e620c6c256982fb8d0f1a1d36a282bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:23:54.769467 env[1824]: time="2024-02-12T20:23:54.769390901Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.27.0\" returns image reference \"sha256:fba96c9caf161e105c76b559b06b4b2337b89b54833d69984209161d93145969\""
Feb 12 20:23:54.785145 env[1824]: time="2024-02-12T20:23:54.785084571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\""
Feb 12 20:23:54.814325 env[1824]: time="2024-02-12T20:23:54.814166086Z" level=info msg="CreateContainer within sandbox \"ef69d7e0142d17323e8b328b4b122a268d8b2b96786b326afbb0d9e1bb9f3d17\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Feb 12 20:23:54.840150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98189029.mount: Deactivated successfully.
Feb 12 20:23:54.853807 env[1824]: time="2024-02-12T20:23:54.853739006Z" level=info msg="CreateContainer within sandbox \"ef69d7e0142d17323e8b328b4b122a268d8b2b96786b326afbb0d9e1bb9f3d17\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"be4d05f34f0790927c8e316dbaf84fbde577143c1ebf192b5a1eb3a8e5ab80ed\""
Feb 12 20:23:54.855155 env[1824]: time="2024-02-12T20:23:54.855104711Z" level=info msg="StartContainer for \"be4d05f34f0790927c8e316dbaf84fbde577143c1ebf192b5a1eb3a8e5ab80ed\""
Feb 12 20:23:55.044725 env[1824]: time="2024-02-12T20:23:55.044439715Z" level=info msg="StartContainer for \"be4d05f34f0790927c8e316dbaf84fbde577143c1ebf192b5a1eb3a8e5ab80ed\" returns successfully"
Feb 12 20:23:55.225064 kubelet[3105]: E0212 20:23:55.225012 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.225064 kubelet[3105]: W0212 20:23:55.225051 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.225798 kubelet[3105]: E0212 20:23:55.225110 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.226230 kubelet[3105]: E0212 20:23:55.226188 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.226332 kubelet[3105]: W0212 20:23:55.226224 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.226332 kubelet[3105]: E0212 20:23:55.226283 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.226793 kubelet[3105]: E0212 20:23:55.226751 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.226793 kubelet[3105]: W0212 20:23:55.226781 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.226956 kubelet[3105]: E0212 20:23:55.226810 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.227358 kubelet[3105]: E0212 20:23:55.227317 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.227358 kubelet[3105]: W0212 20:23:55.227349 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.227523 kubelet[3105]: E0212 20:23:55.227380 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.227965 kubelet[3105]: E0212 20:23:55.227916 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.227965 kubelet[3105]: W0212 20:23:55.227959 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.228141 kubelet[3105]: E0212 20:23:55.227990 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.228492 kubelet[3105]: E0212 20:23:55.228456 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.228492 kubelet[3105]: W0212 20:23:55.228486 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.228664 kubelet[3105]: E0212 20:23:55.228521 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.229085 kubelet[3105]: E0212 20:23:55.229036 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.229085 kubelet[3105]: W0212 20:23:55.229079 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.229259 kubelet[3105]: E0212 20:23:55.229110 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.230677 kubelet[3105]: E0212 20:23:55.229719 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.230677 kubelet[3105]: W0212 20:23:55.230120 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.230677 kubelet[3105]: E0212 20:23:55.230179 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.234172 kubelet[3105]: E0212 20:23:55.231851 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.234172 kubelet[3105]: W0212 20:23:55.231894 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.234172 kubelet[3105]: E0212 20:23:55.231938 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.234172 kubelet[3105]: E0212 20:23:55.232394 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.234172 kubelet[3105]: W0212 20:23:55.232414 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.234172 kubelet[3105]: E0212 20:23:55.232444 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.235923 kubelet[3105]: E0212 20:23:55.235866 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.235923 kubelet[3105]: W0212 20:23:55.235903 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.236145 kubelet[3105]: E0212 20:23:55.235942 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.239274 kubelet[3105]: E0212 20:23:55.236378 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.239274 kubelet[3105]: W0212 20:23:55.236407 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.239274 kubelet[3105]: E0212 20:23:55.236441 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.260387 kubelet[3105]: E0212 20:23:55.260056 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.260387 kubelet[3105]: W0212 20:23:55.260089 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.260387 kubelet[3105]: E0212 20:23:55.260132 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.261206 kubelet[3105]: E0212 20:23:55.260881 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.261206 kubelet[3105]: W0212 20:23:55.260915 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.261206 kubelet[3105]: E0212 20:23:55.260969 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.261900 kubelet[3105]: E0212 20:23:55.261685 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.261900 kubelet[3105]: W0212 20:23:55.261717 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.261900 kubelet[3105]: E0212 20:23:55.261774 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.262326 kubelet[3105]: E0212 20:23:55.262276 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.262326 kubelet[3105]: W0212 20:23:55.262317 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.262511 kubelet[3105]: E0212 20:23:55.262369 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.262827 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.265653 kubelet[3105]: W0212 20:23:55.262866 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.263283 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.265653 kubelet[3105]: W0212 20:23:55.263313 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.263726 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.265653 kubelet[3105]: W0212 20:23:55.263756 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.263790 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.264115 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.265653 kubelet[3105]: W0212 20:23:55.264135 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.264162 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.265653 kubelet[3105]: E0212 20:23:55.264487 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.266342 kubelet[3105]: W0212 20:23:55.264509 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.266342 kubelet[3105]: E0212 20:23:55.264560 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.266342 kubelet[3105]: E0212 20:23:55.266064 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.266342 kubelet[3105]: W0212 20:23:55.266095 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.266342 kubelet[3105]: E0212 20:23:55.266129 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.266949 kubelet[3105]: E0212 20:23:55.266696 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.266949 kubelet[3105]: E0212 20:23:55.266743 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.268849 kubelet[3105]: E0212 20:23:55.268796 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.268849 kubelet[3105]: W0212 20:23:55.268837 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.269109 kubelet[3105]: E0212 20:23:55.268890 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.271013 kubelet[3105]: E0212 20:23:55.269268 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.271013 kubelet[3105]: W0212 20:23:55.269297 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.271013 kubelet[3105]: E0212 20:23:55.269337 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.271715 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.276595 kubelet[3105]: W0212 20:23:55.271754 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.272147 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.276595 kubelet[3105]: W0212 20:23:55.272169 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.272200 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.272516 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.276595 kubelet[3105]: W0212 20:23:55.272536 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.272649 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.276595 kubelet[3105]: E0212 20:23:55.273168 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.276595 kubelet[3105]: W0212 20:23:55.273195 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.273228 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.274375 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.277277 kubelet[3105]: W0212 20:23:55.274404 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.274440 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.274489 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.275707 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:55.277277 kubelet[3105]: W0212 20:23:55.275739 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:55.277277 kubelet[3105]: E0212 20:23:55.275776 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.010357 kubelet[3105]: E0212 20:23:56.010303 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d
Feb 12 20:23:56.132398 kubelet[3105]: I0212 20:23:56.132364 3105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 20:23:56.142815 kubelet[3105]: E0212 20:23:56.142744 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.143046 kubelet[3105]: W0212 20:23:56.143014 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.143200 kubelet[3105]: E0212 20:23:56.143178 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.144844 kubelet[3105]: E0212 20:23:56.144786 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.145097 kubelet[3105]: W0212 20:23:56.145069 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.145246 kubelet[3105]: E0212 20:23:56.145225 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.158596 kubelet[3105]: E0212 20:23:56.157735 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.158596 kubelet[3105]: W0212 20:23:56.157774 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.158596 kubelet[3105]: E0212 20:23:56.157814 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.158911 kubelet[3105]: E0212 20:23:56.158725 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.158911 kubelet[3105]: W0212 20:23:56.158749 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.158911 kubelet[3105]: E0212 20:23:56.158789 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.159162 kubelet[3105]: E0212 20:23:56.159126 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.159162 kubelet[3105]: W0212 20:23:56.159156 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.159300 kubelet[3105]: E0212 20:23:56.159183 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.159531 kubelet[3105]: E0212 20:23:56.159495 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.159531 kubelet[3105]: W0212 20:23:56.159524 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.159718 kubelet[3105]: E0212 20:23:56.159576 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.160428 kubelet[3105]: E0212 20:23:56.160163 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.160428 kubelet[3105]: W0212 20:23:56.160207 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.160428 kubelet[3105]: E0212 20:23:56.160246 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.160854 kubelet[3105]: E0212 20:23:56.160659 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.160854 kubelet[3105]: W0212 20:23:56.160686 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.160854 kubelet[3105]: E0212 20:23:56.160724 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.161096 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.168577 kubelet[3105]: W0212 20:23:56.161140 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.161174 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.161633 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.168577 kubelet[3105]: W0212 20:23:56.161663 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.161696 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.162101 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.168577 kubelet[3105]: W0212 20:23:56.162128 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.162163 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 12 20:23:56.168577 kubelet[3105]: E0212 20:23:56.162602 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 12 20:23:56.169234 kubelet[3105]: W0212 20:23:56.162629 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 12 20:23:56.169234 kubelet[3105]: E0212 20:23:56.162663 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 12 20:23:56.172136 kubelet[3105]: E0212 20:23:56.171808 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.173432 kubelet[3105]: W0212 20:23:56.172980 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.174755 kubelet[3105]: E0212 20:23:56.174714 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.177030 kubelet[3105]: E0212 20:23:56.176048 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.177030 kubelet[3105]: W0212 20:23:56.176091 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.177030 kubelet[3105]: E0212 20:23:56.176127 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.177030 kubelet[3105]: E0212 20:23:56.176473 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.177030 kubelet[3105]: W0212 20:23:56.176498 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.177030 kubelet[3105]: E0212 20:23:56.176532 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.178100 kubelet[3105]: E0212 20:23:56.177739 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.178100 kubelet[3105]: W0212 20:23:56.177772 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.178100 kubelet[3105]: E0212 20:23:56.177821 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.178739 kubelet[3105]: E0212 20:23:56.178703 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.178946 kubelet[3105]: W0212 20:23:56.178911 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.179294 kubelet[3105]: E0212 20:23:56.179260 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.180148 kubelet[3105]: E0212 20:23:56.180111 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.180416 kubelet[3105]: W0212 20:23:56.180326 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.180707 kubelet[3105]: E0212 20:23:56.180674 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.181376 kubelet[3105]: E0212 20:23:56.181346 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.182072 kubelet[3105]: W0212 20:23:56.182032 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.182251 kubelet[3105]: E0212 20:23:56.182229 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.182938 kubelet[3105]: E0212 20:23:56.182909 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.183131 kubelet[3105]: W0212 20:23:56.183103 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.183410 kubelet[3105]: E0212 20:23:56.183386 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.184716 kubelet[3105]: E0212 20:23:56.184683 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.186311 kubelet[3105]: W0212 20:23:56.186266 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.186615 kubelet[3105]: E0212 20:23:56.186537 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.187162 kubelet[3105]: E0212 20:23:56.187138 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.188109 kubelet[3105]: W0212 20:23:56.188033 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.188327 kubelet[3105]: E0212 20:23:56.188304 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.188875 kubelet[3105]: E0212 20:23:56.188847 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.189536 kubelet[3105]: W0212 20:23:56.189436 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.189836 kubelet[3105]: E0212 20:23:56.189811 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.192143 kubelet[3105]: E0212 20:23:56.192107 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.192353 kubelet[3105]: W0212 20:23:56.192323 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.192527 kubelet[3105]: E0212 20:23:56.192505 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.193095 kubelet[3105]: E0212 20:23:56.193067 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.193269 kubelet[3105]: W0212 20:23:56.193242 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.193480 kubelet[3105]: E0212 20:23:56.193458 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.198787 kubelet[3105]: E0212 20:23:56.198751 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.198999 kubelet[3105]: W0212 20:23:56.198969 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.199147 kubelet[3105]: E0212 20:23:56.199125 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.199747 kubelet[3105]: E0212 20:23:56.199717 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.199943 kubelet[3105]: W0212 20:23:56.199912 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.200083 kubelet[3105]: E0212 20:23:56.200062 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.201371 kubelet[3105]: E0212 20:23:56.201292 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.201704 kubelet[3105]: W0212 20:23:56.201626 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.201867 kubelet[3105]: E0212 20:23:56.201845 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 12 20:23:56.202844 kubelet[3105]: E0212 20:23:56.202813 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.203159 kubelet[3105]: W0212 20:23:56.203130 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.203304 kubelet[3105]: E0212 20:23:56.203282 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.205674 kubelet[3105]: E0212 20:23:56.205638 3105 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 12 20:23:56.205933 kubelet[3105]: W0212 20:23:56.205901 3105 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 12 20:23:56.206067 kubelet[3105]: E0212 20:23:56.206045 3105 plugins.go:736] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 12 20:23:56.266672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242058270.mount: Deactivated successfully. 
Feb 12 20:23:56.478695 env[1824]: time="2024-02-12T20:23:56.478598491Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:56.488586 env[1824]: time="2024-02-12T20:23:56.488497543Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:56.492824 env[1824]: time="2024-02-12T20:23:56.492768385Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:56.497267 env[1824]: time="2024-02-12T20:23:56.497209008Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:b05edbd1f80db4ada229e6001a666a7dd36bb6ab617143684fb3d28abfc4b71e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:23:56.500357 env[1824]: time="2024-02-12T20:23:56.500067943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.27.0\" returns image reference \"sha256:cbddd33ed55a4a5c129e8f09945d426860425b9778d9402efe7bcefea7990a57\"" Feb 12 20:23:56.527033 env[1824]: time="2024-02-12T20:23:56.526332483Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 12 20:23:56.556305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240997668.mount: Deactivated successfully. 
Feb 12 20:23:56.567282 env[1824]: time="2024-02-12T20:23:56.567205882Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed\"" Feb 12 20:23:56.568518 env[1824]: time="2024-02-12T20:23:56.568464219Z" level=info msg="StartContainer for \"c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed\"" Feb 12 20:23:56.740416 env[1824]: time="2024-02-12T20:23:56.740351703Z" level=info msg="StartContainer for \"c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed\" returns successfully" Feb 12 20:23:56.864365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed-rootfs.mount: Deactivated successfully. Feb 12 20:23:57.163715 kubelet[3105]: I0212 20:23:57.163668 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6cddcbdd45-8pckb" podStartSLOduration=-9.223372030691162e+09 pod.CreationTimestamp="2024-02-12 20:23:51 +0000 UTC" firstStartedPulling="2024-02-12 20:23:52.14814637 +0000 UTC m=+24.856826931" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:23:55.161087058 +0000 UTC m=+27.869767643" watchObservedRunningTime="2024-02-12 20:23:57.163613301 +0000 UTC m=+29.872293910" Feb 12 20:23:57.351854 env[1824]: time="2024-02-12T20:23:57.351761997Z" level=info msg="shim disconnected" id=c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed Feb 12 20:23:57.352141 env[1824]: time="2024-02-12T20:23:57.351858185Z" level=warning msg="cleaning up after shim disconnected" id=c4c9e92768f93d289c43e88dd34d8be6386a55193d4998d5157b9881f7b7f4ed namespace=k8s.io Feb 12 20:23:57.352141 env[1824]: time="2024-02-12T20:23:57.351883515Z" level=info msg="cleaning up dead shim" Feb 12 20:23:57.365907 
env[1824]: time="2024-02-12T20:23:57.365834664Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:23:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3864 runtime=io.containerd.runc.v2\n" Feb 12 20:23:58.004187 kubelet[3105]: E0212 20:23:58.004146 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d Feb 12 20:23:58.144695 env[1824]: time="2024-02-12T20:23:58.144639157Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\"" Feb 12 20:23:59.478257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1700246705.mount: Deactivated successfully. Feb 12 20:24:00.005640 kubelet[3105]: E0212 20:24:00.004231 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d Feb 12 20:24:02.005391 kubelet[3105]: E0212 20:24:02.005343 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d Feb 12 20:24:03.253060 env[1824]: time="2024-02-12T20:24:03.252958228Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:03.258102 env[1824]: time="2024-02-12T20:24:03.258034104Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:03.261685 env[1824]: time="2024-02-12T20:24:03.261624506Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:03.265055 env[1824]: time="2024-02-12T20:24:03.264979752Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:d943b4c23e82a39b0186a1a3b2fe8f728e543d503df72d7be521501a82b7e7b4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:03.266302 env[1824]: time="2024-02-12T20:24:03.266234991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.27.0\" returns image reference \"sha256:9c9318f5fbf505fc3d84676966009a3887e58ea1e3eac10039e5a96dfceb254b\"" Feb 12 20:24:03.279332 env[1824]: time="2024-02-12T20:24:03.279252899Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 12 20:24:03.305721 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3307066464.mount: Deactivated successfully. 
Feb 12 20:24:03.312456 env[1824]: time="2024-02-12T20:24:03.312393155Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9\"" Feb 12 20:24:03.315755 env[1824]: time="2024-02-12T20:24:03.313684752Z" level=info msg="StartContainer for \"21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9\"" Feb 12 20:24:03.445934 env[1824]: time="2024-02-12T20:24:03.443435127Z" level=info msg="StartContainer for \"21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9\" returns successfully" Feb 12 20:24:04.002934 kubelet[3105]: E0212 20:24:04.002897 3105 pod_workers.go:965] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d Feb 12 20:24:04.454627 env[1824]: time="2024-02-12T20:24:04.454492012Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 12 20:24:04.492452 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9-rootfs.mount: Deactivated successfully. 
Feb 12 20:24:04.505309 kubelet[3105]: I0212 20:24:04.505247 3105 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 12 20:24:04.546201 kubelet[3105]: I0212 20:24:04.546145 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:04.561671 kubelet[3105]: I0212 20:24:04.561595 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:04.568680 kubelet[3105]: I0212 20:24:04.568620 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:24:04.651490 kubelet[3105]: I0212 20:24:04.651454 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxsq4\" (UniqueName: \"kubernetes.io/projected/00bc6aea-403c-4dfd-a949-09ac35a4157e-kube-api-access-vxsq4\") pod \"calico-kube-controllers-69855554b9-7fn7j\" (UID: \"00bc6aea-403c-4dfd-a949-09ac35a4157e\") " pod="calico-system/calico-kube-controllers-69855554b9-7fn7j" Feb 12 20:24:04.651806 kubelet[3105]: I0212 20:24:04.651782 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/00bc6aea-403c-4dfd-a949-09ac35a4157e-tigera-ca-bundle\") pod \"calico-kube-controllers-69855554b9-7fn7j\" (UID: \"00bc6aea-403c-4dfd-a949-09ac35a4157e\") " pod="calico-system/calico-kube-controllers-69855554b9-7fn7j" Feb 12 20:24:04.651994 kubelet[3105]: I0212 20:24:04.651972 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8zwp\" (UniqueName: \"kubernetes.io/projected/2cbee1c0-7ffa-4607-96e5-c1d08a403936-kube-api-access-x8zwp\") pod \"coredns-787d4945fb-hzrlr\" (UID: \"2cbee1c0-7ffa-4607-96e5-c1d08a403936\") " pod="kube-system/coredns-787d4945fb-hzrlr" Feb 12 20:24:04.652191 kubelet[3105]: I0212 20:24:04.652170 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/2cbee1c0-7ffa-4607-96e5-c1d08a403936-config-volume\") pod \"coredns-787d4945fb-hzrlr\" (UID: \"2cbee1c0-7ffa-4607-96e5-c1d08a403936\") " pod="kube-system/coredns-787d4945fb-hzrlr" Feb 12 20:24:04.652383 kubelet[3105]: I0212 20:24:04.652330 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/465dca2b-3292-462e-bbd5-a3a7982cda7e-config-volume\") pod \"coredns-787d4945fb-r94bf\" (UID: \"465dca2b-3292-462e-bbd5-a3a7982cda7e\") " pod="kube-system/coredns-787d4945fb-r94bf" Feb 12 20:24:04.652633 kubelet[3105]: I0212 20:24:04.652610 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r697n\" (UniqueName: \"kubernetes.io/projected/465dca2b-3292-462e-bbd5-a3a7982cda7e-kube-api-access-r697n\") pod \"coredns-787d4945fb-r94bf\" (UID: \"465dca2b-3292-462e-bbd5-a3a7982cda7e\") " pod="kube-system/coredns-787d4945fb-r94bf" Feb 12 20:24:04.858289 env[1824]: time="2024-02-12T20:24:04.858081071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hzrlr,Uid:2cbee1c0-7ffa-4607-96e5-c1d08a403936,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:04.881356 env[1824]: time="2024-02-12T20:24:04.881300815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855554b9-7fn7j,Uid:00bc6aea-403c-4dfd-a949-09ac35a4157e,Namespace:calico-system,Attempt:0,}" Feb 12 20:24:04.910071 env[1824]: time="2024-02-12T20:24:04.909999698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r94bf,Uid:465dca2b-3292-462e-bbd5-a3a7982cda7e,Namespace:kube-system,Attempt:0,}" Feb 12 20:24:05.643222 env[1824]: time="2024-02-12T20:24:05.643119872Z" level=info msg="shim disconnected" id=21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9 Feb 12 20:24:05.643996 env[1824]: time="2024-02-12T20:24:05.643243095Z" level=warning 
msg="cleaning up after shim disconnected" id=21d52092d742d4595680826e1b9f30e0b671b79b7d097f1b1389a5b8018636c9 namespace=k8s.io Feb 12 20:24:05.643996 env[1824]: time="2024-02-12T20:24:05.643266891Z" level=info msg="cleaning up dead shim" Feb 12 20:24:05.715789 env[1824]: time="2024-02-12T20:24:05.715732865Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:24:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3935 runtime=io.containerd.runc.v2\n" Feb 12 20:24:05.830790 env[1824]: time="2024-02-12T20:24:05.830699912Z" level=error msg="Failed to destroy network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 12 20:24:05.835060 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7-shm.mount: Deactivated successfully. 
Feb 12 20:24:05.837558 env[1824]: time="2024-02-12T20:24:05.837469419Z" level=error msg="encountered an error cleaning up failed sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.837826 env[1824]: time="2024-02-12T20:24:05.837774029Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r94bf,Uid:465dca2b-3292-462e-bbd5-a3a7982cda7e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.842143 kubelet[3105]: E0212 20:24:05.841447 3105 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.842143 kubelet[3105]: E0212 20:24:05.841589 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r94bf"
Feb 12 20:24:05.842143 kubelet[3105]: E0212 20:24:05.841629 3105 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-r94bf"
Feb 12 20:24:05.842977 kubelet[3105]: E0212 20:24:05.842073 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-r94bf_kube-system(465dca2b-3292-462e-bbd5-a3a7982cda7e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-r94bf_kube-system(465dca2b-3292-462e-bbd5-a3a7982cda7e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-r94bf" podUID=465dca2b-3292-462e-bbd5-a3a7982cda7e
Feb 12 20:24:05.854866 env[1824]: time="2024-02-12T20:24:05.854794171Z" level=error msg="Failed to destroy network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.855749 env[1824]: time="2024-02-12T20:24:05.855690912Z" level=error msg="encountered an error cleaning up failed sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.855984 env[1824]: time="2024-02-12T20:24:05.855932092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855554b9-7fn7j,Uid:00bc6aea-403c-4dfd-a949-09ac35a4157e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.857150 kubelet[3105]: E0212 20:24:05.856458 3105 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.857150 kubelet[3105]: E0212 20:24:05.856595 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69855554b9-7fn7j"
Feb 12 20:24:05.857150 kubelet[3105]: E0212 20:24:05.856659 3105 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-69855554b9-7fn7j"
Feb 12 20:24:05.857485 kubelet[3105]: E0212 20:24:05.857105 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-69855554b9-7fn7j_calico-system(00bc6aea-403c-4dfd-a949-09ac35a4157e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-69855554b9-7fn7j_calico-system(00bc6aea-403c-4dfd-a949-09ac35a4157e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69855554b9-7fn7j" podUID=00bc6aea-403c-4dfd-a949-09ac35a4157e
Feb 12 20:24:05.863475 env[1824]: time="2024-02-12T20:24:05.863401307Z" level=error msg="Failed to destroy network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.864328 env[1824]: time="2024-02-12T20:24:05.864272789Z" level=error msg="encountered an error cleaning up failed sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.864576 env[1824]: time="2024-02-12T20:24:05.864505353Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hzrlr,Uid:2cbee1c0-7ffa-4607-96e5-c1d08a403936,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.865758 kubelet[3105]: E0212 20:24:05.864999 3105 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:05.865758 kubelet[3105]: E0212 20:24:05.865095 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-hzrlr"
Feb 12 20:24:05.865758 kubelet[3105]: E0212 20:24:05.865161 3105 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-787d4945fb-hzrlr"
Feb 12 20:24:05.866075 kubelet[3105]: E0212 20:24:05.865689 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-hzrlr_kube-system(2cbee1c0-7ffa-4607-96e5-c1d08a403936)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-hzrlr_kube-system(2cbee1c0-7ffa-4607-96e5-c1d08a403936)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-hzrlr" podUID=2cbee1c0-7ffa-4607-96e5-c1d08a403936
Feb 12 20:24:06.008528 env[1824]: time="2024-02-12T20:24:06.008453138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpmmd,Uid:8c257656-1c37-42c0-80d9-a7f2f0b7582d,Namespace:calico-system,Attempt:0,}"
Feb 12 20:24:06.105072 env[1824]: time="2024-02-12T20:24:06.104997675Z" level=error msg="Failed to destroy network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.106021 env[1824]: time="2024-02-12T20:24:06.105959227Z" level=error msg="encountered an error cleaning up failed sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.106473 env[1824]: time="2024-02-12T20:24:06.106407892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpmmd,Uid:8c257656-1c37-42c0-80d9-a7f2f0b7582d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.106954 kubelet[3105]: E0212 20:24:06.106906 3105 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.107118 kubelet[3105]: E0212 20:24:06.106996 3105 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpmmd"
Feb 12 20:24:06.107118 kubelet[3105]: E0212 20:24:06.107036 3105 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-tpmmd"
Feb 12 20:24:06.107353 kubelet[3105]: E0212 20:24:06.107130 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-tpmmd_calico-system(8c257656-1c37-42c0-80d9-a7f2f0b7582d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-tpmmd_calico-system(8c257656-1c37-42c0-80d9-a7f2f0b7582d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d
Feb 12 20:24:06.170293 kubelet[3105]: I0212 20:24:06.166274 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f"
Feb 12 20:24:06.170749 env[1824]: time="2024-02-12T20:24:06.167532029Z" level=info msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\""
Feb 12 20:24:06.177013 kubelet[3105]: I0212 20:24:06.174181 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e"
Feb 12 20:24:06.177203 env[1824]: time="2024-02-12T20:24:06.175244074Z" level=info msg="StopPodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\""
Feb 12 20:24:06.183081 kubelet[3105]: I0212 20:24:06.183047 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a"
Feb 12 20:24:06.187596 env[1824]: time="2024-02-12T20:24:06.187325581Z" level=info msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\""
Feb 12 20:24:06.230576 env[1824]: time="2024-02-12T20:24:06.226579132Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\""
Feb 12 20:24:06.248762 kubelet[3105]: I0212 20:24:06.248012 3105 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7"
Feb 12 20:24:06.251116 env[1824]: time="2024-02-12T20:24:06.249360320Z" level=info msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\""
Feb 12 20:24:06.327380 env[1824]: time="2024-02-12T20:24:06.327177361Z" level=error msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" failed" error="failed to destroy network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.328584 kubelet[3105]: E0212 20:24:06.328030 3105 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a"
Feb 12 20:24:06.328584 kubelet[3105]: E0212 20:24:06.328200 3105 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a}
Feb 12 20:24:06.328584 kubelet[3105]: E0212 20:24:06.328452 3105 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 12 20:24:06.328584 kubelet[3105]: E0212 20:24:06.328525 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"8c257656-1c37-42c0-80d9-a7f2f0b7582d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-tpmmd" podUID=8c257656-1c37-42c0-80d9-a7f2f0b7582d
Feb 12 20:24:06.348663 env[1824]: time="2024-02-12T20:24:06.348510989Z" level=error msg="StopPodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" failed" error="failed to destroy network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.348960 kubelet[3105]: E0212 20:24:06.348916 3105 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e"
Feb 12 20:24:06.349079 kubelet[3105]: E0212 20:24:06.348983 3105 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e}
Feb 12 20:24:06.349079 kubelet[3105]: E0212 20:24:06.349052 3105 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"00bc6aea-403c-4dfd-a949-09ac35a4157e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 12 20:24:06.349294 kubelet[3105]: E0212 20:24:06.349203 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"00bc6aea-403c-4dfd-a949-09ac35a4157e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-69855554b9-7fn7j" podUID=00bc6aea-403c-4dfd-a949-09ac35a4157e
Feb 12 20:24:06.352460 env[1824]: time="2024-02-12T20:24:06.352334649Z" level=error msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" failed" error="failed to destroy network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.353089 kubelet[3105]: E0212 20:24:06.352872 3105 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f"
Feb 12 20:24:06.353089 kubelet[3105]: E0212 20:24:06.352932 3105 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f}
Feb 12 20:24:06.353089 kubelet[3105]: E0212 20:24:06.352992 3105 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"2cbee1c0-7ffa-4607-96e5-c1d08a403936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 12 20:24:06.353089 kubelet[3105]: E0212 20:24:06.353048 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"2cbee1c0-7ffa-4607-96e5-c1d08a403936\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-hzrlr" podUID=2cbee1c0-7ffa-4607-96e5-c1d08a403936
Feb 12 20:24:06.370205 env[1824]: time="2024-02-12T20:24:06.370102165Z" level=error msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" failed" error="failed to destroy network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 12 20:24:06.370494 kubelet[3105]: E0212 20:24:06.370433 3105 remote_runtime.go:205] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7"
Feb 12 20:24:06.370494 kubelet[3105]: E0212 20:24:06.370505 3105 kuberuntime_manager.go:965] "Failed to stop sandbox" podSandboxID={Type:containerd ID:f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7}
Feb 12 20:24:06.370738 kubelet[3105]: E0212 20:24:06.370608 3105 kuberuntime_manager.go:705] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"465dca2b-3292-462e-bbd5-a3a7982cda7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
Feb 12 20:24:06.370738 kubelet[3105]: E0212 20:24:06.370670 3105 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"465dca2b-3292-462e-bbd5-a3a7982cda7e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-787d4945fb-r94bf" podUID=465dca2b-3292-462e-bbd5-a3a7982cda7e
Feb 12 20:24:06.628659 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f-shm.mount: Deactivated successfully.
Feb 12 20:24:06.629055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e-shm.mount: Deactivated successfully.
Feb 12 20:24:14.543899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2104306034.mount: Deactivated successfully.
Feb 12 20:24:14.628904 env[1824]: time="2024-02-12T20:24:14.628824417Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:14.632099 env[1824]: time="2024-02-12T20:24:14.632034293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:14.635199 env[1824]: time="2024-02-12T20:24:14.635131948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:14.638684 env[1824]: time="2024-02-12T20:24:14.638628940Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:a45dffb21a0e9ca8962f36359a2ab776beeecd93843543c2fa1745d7bbb0f754,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 12 20:24:14.639184 env[1824]: time="2024-02-12T20:24:14.639136562Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.27.0\" returns image reference \"sha256:c445639cb28807ced09724016dc3b273b170b14d3b3d0c39b1affa1cc6b68774\""
Feb 12 20:24:14.670460 env[1824]: time="2024-02-12T20:24:14.670405337Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
Feb 12 20:24:14.700993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3100379051.mount: Deactivated successfully.
Feb 12 20:24:14.709500 env[1824]: time="2024-02-12T20:24:14.709402139Z" level=info msg="CreateContainer within sandbox \"264b1c530a0a57d9ef1a99372fbb39213e391ba8b2449ed80b801b7e90cd3ca9\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"af901d6703a0a98cfe9ca7d379bb6b0db9c836db509fea0f0b516342eee99ff1\""
Feb 12 20:24:14.711601 env[1824]: time="2024-02-12T20:24:14.710313982Z" level=info msg="StartContainer for \"af901d6703a0a98cfe9ca7d379bb6b0db9c836db509fea0f0b516342eee99ff1\""
Feb 12 20:24:14.825789 env[1824]: time="2024-02-12T20:24:14.825637670Z" level=info msg="StartContainer for \"af901d6703a0a98cfe9ca7d379bb6b0db9c836db509fea0f0b516342eee99ff1\" returns successfully"
Feb 12 20:24:14.963599 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Feb 12 20:24:14.963769 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Feb 12 20:24:16.464920 kubelet[3105]: I0212 20:24:16.464873 3105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness"
Feb 12 20:24:16.609576 kernel: kauditd_printk_skb: 8 callbacks suppressed
Feb 12 20:24:16.609733 kernel: audit: type=1400 audit(1707769456.599:281): avc: denied { write } for pid=4279 comm="tee" name="fd" dev="proc" ino=21365 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.599000 audit[4279]: AVC avc: denied { write } for pid=4279 comm="tee" name="fd" dev="proc" ino=21365 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.599000 audit[4279]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffec6cc983 a2=241 a3=1b6 items=1 ppid=4249 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.599000 audit: CWD cwd="/etc/service/enabled/cni/log"
Feb 12 20:24:16.635205 kernel: audit: type=1300 audit(1707769456.599:281): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffec6cc983 a2=241 a3=1b6 items=1 ppid=4249 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.635352 kernel: audit: type=1307 audit(1707769456.599:281): cwd="/etc/service/enabled/cni/log"
Feb 12 20:24:16.599000 audit: PATH item=0 name="/dev/fd/63" inode=21811 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.643691 kernel: audit: type=1302 audit(1707769456.599:281): item=0 name="/dev/fd/63" inode=21811 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.599000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.652034 kernel: audit: type=1327 audit(1707769456.599:281): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.675000 audit[4289]: AVC avc: denied { write } for pid=4289 comm="tee" name="fd" dev="proc" ino=21378 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.675000 audit[4289]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf825972 a2=241 a3=1b6 items=1 ppid=4261 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.694772 kernel: audit: type=1400 audit(1707769456.675:282): avc: denied { write } for pid=4289 comm="tee" name="fd" dev="proc" ino=21378 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.695009 kernel: audit: type=1300 audit(1707769456.675:282): arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcf825972 a2=241 a3=1b6 items=1 ppid=4261 pid=4289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.675000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Feb 12 20:24:16.725308 kernel: audit: type=1307 audit(1707769456.675:282): cwd="/etc/service/enabled/node-status-reporter/log"
Feb 12 20:24:16.675000 audit: PATH item=0 name="/dev/fd/63" inode=21827 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.742142 kernel: audit: type=1302 audit(1707769456.675:282): item=0 name="/dev/fd/63" inode=21827 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.675000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.752464 kernel: audit: type=1327 audit(1707769456.675:282): proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.793000 audit[4297]: AVC avc: denied { write } for pid=4297 comm="tee" name="fd" dev="proc" ino=21386 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.793000 audit[4297]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffceb31971 a2=241 a3=1b6 items=1 ppid=4252 pid=4297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.793000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Feb 12 20:24:16.793000 audit: PATH item=0 name="/dev/fd/63" inode=21383 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.793000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.849000 audit[4305]: AVC avc: denied { write } for pid=4305 comm="tee" name="fd" dev="proc" ino=21891 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.849000 audit[4305]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff7097981 a2=241 a3=1b6 items=1 ppid=4263 pid=4305 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.849000 audit: CWD cwd="/etc/service/enabled/felix/log"
Feb 12 20:24:16.849000 audit: PATH item=0 name="/dev/fd/63" inode=21837 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.849000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.859000 audit[4317]: AVC avc: denied { write } for pid=4317 comm="tee" name="fd" dev="proc" ino=21895 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.859000 audit[4317]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffefb7a981 a2=241 a3=1b6 items=1 ppid=4255 pid=4317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.859000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Feb 12 20:24:16.859000 audit: PATH item=0 name="/dev/fd/63" inode=21846 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.859000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.860000 audit[4315]: AVC avc: denied { write } for pid=4315 comm="tee" name="fd" dev="proc" ino=21899 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.860000 audit[4315]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffde4e0981 a2=241 a3=1b6 items=1 ppid=4257 pid=4315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.860000 audit: CWD cwd="/etc/service/enabled/confd/log"
Feb 12 20:24:16.860000 audit: PATH item=0 name="/dev/fd/63" inode=21845 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.860000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:16.917000 audit[4320]: AVC avc: denied { write } for pid=4320 comm="tee" name="fd" dev="proc" ino=21404 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Feb 12 20:24:16.917000 audit[4320]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc66a5982 a2=241 a3=1b6 items=1 ppid=4251 pid=4320 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 12 20:24:16.917000 audit: CWD cwd="/etc/service/enabled/bird/log"
Feb 12 20:24:16.917000 audit: PATH item=0 name="/dev/fd/63" inode=21850 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 12 20:24:16.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Feb 12 20:24:17.005068 env[1824]: time="2024-02-12T20:24:17.004910849Z" level=info msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\""
Feb 12 20:24:17.007109 env[1824]: time="2024-02-12T20:24:17.007036750Z" level=info msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\""
Feb 12 20:24:17.384940 kubelet[3105]: I0212 20:24:17.384779 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-57b9r" podStartSLOduration=-9.223372010470053e+09 pod.CreationTimestamp="2024-02-12 20:23:51 +0000 UTC" firstStartedPulling="2024-02-12 20:23:52.422086528 +0000 UTC m=+25.130767089" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:15.321165159 +0000 UTC m=+48.029845732" watchObservedRunningTime="2024-02-12 20:24:17.384723453 +0000 UTC m=+50.093404014"
Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.393 [INFO][4370] k8s.go 578: Cleaning up netns ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a"
Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.397 [INFO][4370] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" iface="eth0" netns="/var/run/netns/cni-f9c4833f-3336-bd28-5480-9c966ac6fb91"
Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.399 [INFO][4370] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" iface="eth0" netns="/var/run/netns/cni-f9c4833f-3336-bd28-5480-9c966ac6fb91"
Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.401 [INFO][4370] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do.
ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" iface="eth0" netns="/var/run/netns/cni-f9c4833f-3336-bd28-5480-9c966ac6fb91" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.403 [INFO][4370] k8s.go 585: Releasing IP address(es) ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.403 [INFO][4370] utils.go 188: Calico CNI releasing IP address ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.456 [INFO][4399] ipam_plugin.go 415: Releasing address using handleID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.456 [INFO][4399] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.456 [INFO][4399] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.471 [WARNING][4399] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.471 [INFO][4399] ipam_plugin.go 443: Releasing address using workloadID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.473 [INFO][4399] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:17.488185 env[1824]: 2024-02-12 20:24:17.483 [INFO][4370] k8s.go 591: Teardown processing complete. ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:17.497037 env[1824]: time="2024-02-12T20:24:17.496169923Z" level=info msg="TearDown network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" successfully" Feb 12 20:24:17.497037 env[1824]: time="2024-02-12T20:24:17.496227221Z" level=info msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" returns successfully" Feb 12 20:24:17.495005 systemd[1]: run-netns-cni\x2df9c4833f\x2d3336\x2dbd28\x2d5480\x2d9c966ac6fb91.mount: Deactivated successfully. Feb 12 20:24:17.497768 env[1824]: time="2024-02-12T20:24:17.497450194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpmmd,Uid:8c257656-1c37-42c0-80d9-a7f2f0b7582d,Namespace:calico-system,Attempt:1,}" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.385 [INFO][4372] k8s.go 578: Cleaning up netns ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.385 [INFO][4372] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" iface="eth0" netns="/var/run/netns/cni-cc5d0691-05aa-4318-dc4f-9f151c8f2d28" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.387 [INFO][4372] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" iface="eth0" netns="/var/run/netns/cni-cc5d0691-05aa-4318-dc4f-9f151c8f2d28" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.389 [INFO][4372] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" iface="eth0" netns="/var/run/netns/cni-cc5d0691-05aa-4318-dc4f-9f151c8f2d28" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.389 [INFO][4372] k8s.go 585: Releasing IP address(es) ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.389 [INFO][4372] utils.go 188: Calico CNI releasing IP address ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.466 [INFO][4394] ipam_plugin.go 415: Releasing address using handleID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.466 [INFO][4394] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.473 [INFO][4394] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.487 [WARNING][4394] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.487 [INFO][4394] ipam_plugin.go 443: Releasing address using workloadID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.498 [INFO][4394] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:17.513969 env[1824]: 2024-02-12 20:24:17.507 [INFO][4372] k8s.go 591: Teardown processing complete. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:17.519918 env[1824]: time="2024-02-12T20:24:17.519328837Z" level=info msg="TearDown network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" successfully" Feb 12 20:24:17.519918 env[1824]: time="2024-02-12T20:24:17.519385547Z" level=info msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" returns successfully" Feb 12 20:24:17.518284 systemd[1]: run-netns-cni\x2dcc5d0691\x2d05aa\x2d4318\x2ddc4f\x2d9f151c8f2d28.mount: Deactivated successfully. Feb 12 20:24:17.520795 env[1824]: time="2024-02-12T20:24:17.520742688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hzrlr,Uid:2cbee1c0-7ffa-4607-96e5-c1d08a403936,Namespace:kube-system,Attempt:1,}" Feb 12 20:24:17.841886 (udev-worker)[4199]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:24:17.845972 systemd-networkd[1611]: cali921d47b398b: Link UP Feb 12 20:24:17.852083 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:24:17.852237 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali921d47b398b: link becomes ready Feb 12 20:24:17.852300 systemd-networkd[1611]: cali921d47b398b: Gained carrier Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.592 [INFO][4406] utils.go 100: File /var/lib/calico/mtu does not exist Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.620 [INFO][4406] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0 csi-node-driver- calico-system 8c257656-1c37-42c0-80d9-a7f2f0b7582d 678 0 2024-02-12 20:23:51 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7c77f88967 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-16-195 csi-node-driver-tpmmd eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali921d47b398b [] []}} ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.620 [INFO][4406] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.706 [INFO][4431] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" 
HandleID="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.734 [INFO][4431] ipam_plugin.go 268: Auto assigning IP ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" HandleID="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bd730), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-195", "pod":"csi-node-driver-tpmmd", "timestamp":"2024-02-12 20:24:17.70631601 +0000 UTC"}, Hostname:"ip-172-31-16-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.734 [INFO][4431] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.734 [INFO][4431] ipam_plugin.go 371: Acquired host-wide IPAM lock. 
Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.734 [INFO][4431] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-195' Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.738 [INFO][4431] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.746 [INFO][4431] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.753 [INFO][4431] ipam.go 489: Trying affinity for 192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.757 [INFO][4431] ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.761 [INFO][4431] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.761 [INFO][4431] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.767 [INFO][4431] ipam.go 1682: Creating new handle: k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64 Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.778 [INFO][4431] ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.801 [INFO][4431] ipam.go 1216: Successfully claimed IPs: [192.168.126.1/26] block=192.168.126.0/26 handle="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" host="ip-172-31-16-195" Feb 12 
20:24:17.887753 env[1824]: 2024-02-12 20:24:17.801 [INFO][4431] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.1/26] handle="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" host="ip-172-31-16-195" Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.802 [INFO][4431] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:17.887753 env[1824]: 2024-02-12 20:24:17.802 [INFO][4431] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.126.1/26] IPv6=[] ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" HandleID="k8s-pod-network.aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.809 [INFO][4406] k8s.go 385: Populated endpoint ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c257656-1c37-42c0-80d9-a7f2f0b7582d", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"", Pod:"csi-node-driver-tpmmd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali921d47b398b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.809 [INFO][4406] k8s.go 386: Calico CNI using IPs: [192.168.126.1/32] ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.809 [INFO][4406] dataplane_linux.go 68: Setting the host side veth name to cali921d47b398b ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.853 [INFO][4406] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.853 [INFO][4406] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c257656-1c37-42c0-80d9-a7f2f0b7582d", ResourceVersion:"678", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64", Pod:"csi-node-driver-tpmmd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali921d47b398b", MAC:"b6:f7:43:e7:9c:1f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:17.891328 env[1824]: 2024-02-12 20:24:17.875 [INFO][4406] k8s.go 491: Wrote updated endpoint to datastore ContainerID="aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64" Namespace="calico-system" Pod="csi-node-driver-tpmmd" WorkloadEndpoint="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:17.934481 env[1824]: time="2024-02-12T20:24:17.934306227Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:17.934918 env[1824]: time="2024-02-12T20:24:17.934836397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:17.935194 env[1824]: time="2024-02-12T20:24:17.935108730Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:17.935778 env[1824]: time="2024-02-12T20:24:17.935701731Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64 pid=4468 runtime=io.containerd.runc.v2 Feb 12 20:24:17.959767 systemd-networkd[1611]: cali5db6df91c7c: Link UP Feb 12 20:24:17.966281 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali5db6df91c7c: link becomes ready Feb 12 20:24:17.965213 (udev-worker)[4449]: Network interface NamePolicy= disabled on kernel command line. 
Feb 12 20:24:17.966854 systemd-networkd[1611]: cali5db6df91c7c: Gained carrier Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.637 [INFO][4411] utils.go 100: File /var/lib/calico/mtu does not exist Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.661 [INFO][4411] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0 coredns-787d4945fb- kube-system 2cbee1c0-7ffa-4607-96e5-c1d08a403936 677 0 2024-02-12 20:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-195 coredns-787d4945fb-hzrlr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5db6df91c7c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.662 [INFO][4411] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.827 [INFO][4437] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" HandleID="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.880 [INFO][4437] ipam_plugin.go 268: Auto assigning IP ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" 
HandleID="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003174b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-195", "pod":"coredns-787d4945fb-hzrlr", "timestamp":"2024-02-12 20:24:17.827741712 +0000 UTC"}, Hostname:"ip-172-31-16-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.881 [INFO][4437] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.881 [INFO][4437] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.881 [INFO][4437] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-195' Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.885 [INFO][4437] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.894 [INFO][4437] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.901 [INFO][4437] ipam.go 489: Trying affinity for 192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.904 [INFO][4437] ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.909 [INFO][4437] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.909 [INFO][4437] ipam.go 1180: Attempting to 
assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.912 [INFO][4437] ipam.go 1682: Creating new handle: k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.919 [INFO][4437] ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.930 [INFO][4437] ipam.go 1216: Successfully claimed IPs: [192.168.126.2/26] block=192.168.126.0/26 handle="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.930 [INFO][4437] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.2/26] handle="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" host="ip-172-31-16-195" Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.931 [INFO][4437] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:18.063818 env[1824]: 2024-02-12 20:24:17.931 [INFO][4437] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.126.2/26] IPv6=[] ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" HandleID="k8s-pod-network.847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.955 [INFO][4411] k8s.go 385: Populated endpoint ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2cbee1c0-7ffa-4607-96e5-c1d08a403936", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"", Pod:"coredns-787d4945fb-hzrlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db6df91c7c", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.956 [INFO][4411] k8s.go 386: Calico CNI using IPs: [192.168.126.2/32] ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.956 [INFO][4411] dataplane_linux.go 68: Setting the host side veth name to cali5db6df91c7c ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.969 [INFO][4411] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.969 [INFO][4411] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2cbee1c0-7ffa-4607-96e5-c1d08a403936", ResourceVersion:"677", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea", Pod:"coredns-787d4945fb-hzrlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db6df91c7c", MAC:"8e:ec:3b:cb:22:e2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:18.065881 env[1824]: 2024-02-12 20:24:17.998 [INFO][4411] k8s.go 491: Wrote updated endpoint to datastore ContainerID="847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea" Namespace="kube-system" Pod="coredns-787d4945fb-hzrlr" 
WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:18.128734 env[1824]: time="2024-02-12T20:24:18.126584487Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:18.128734 env[1824]: time="2024-02-12T20:24:18.126677544Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:18.128734 env[1824]: time="2024-02-12T20:24:18.126719591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:18.128734 env[1824]: time="2024-02-12T20:24:18.126968513Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea pid=4507 runtime=io.containerd.runc.v2 Feb 12 20:24:18.256184 env[1824]: time="2024-02-12T20:24:18.256127217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-tpmmd,Uid:8c257656-1c37-42c0-80d9-a7f2f0b7582d,Namespace:calico-system,Attempt:1,} returns sandbox id \"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64\"" Feb 12 20:24:18.260312 env[1824]: time="2024-02-12T20:24:18.260245718Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\"" Feb 12 20:24:18.290885 env[1824]: time="2024-02-12T20:24:18.290819608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-hzrlr,Uid:2cbee1c0-7ffa-4607-96e5-c1d08a403936,Namespace:kube-system,Attempt:1,} returns sandbox id \"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea\"" Feb 12 20:24:18.301789 env[1824]: time="2024-02-12T20:24:18.301727095Z" level=info msg="CreateContainer within sandbox \"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 
12 20:24:18.323390 env[1824]: time="2024-02-12T20:24:18.323287831Z" level=info msg="CreateContainer within sandbox \"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"02d5d3f6e792eb63e695252e4f5c1ea995c0c80e51b1b869de31ec604889ea24\"" Feb 12 20:24:18.332928 env[1824]: time="2024-02-12T20:24:18.332858611Z" level=info msg="StartContainer for \"02d5d3f6e792eb63e695252e4f5c1ea995c0c80e51b1b869de31ec604889ea24\"" Feb 12 20:24:18.513402 env[1824]: time="2024-02-12T20:24:18.513321934Z" level=info msg="StartContainer for \"02d5d3f6e792eb63e695252e4f5c1ea995c0c80e51b1b869de31ec604889ea24\" returns successfully" Feb 12 20:24:18.859909 kubelet[3105]: I0212 20:24:18.859532 3105 prober_manager.go:287] "Failed to trigger a manual run" probe="Readiness" Feb 12 20:24:18.996000 audit[4642]: NETFILTER_CFG table=filter:109 family=2 entries=13 op=nft_register_rule pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:18.996000 audit[4642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffe58fbf00 a2=0 a3=ffff830fb6c0 items=0 ppid=3263 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:18.996000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:19.002000 audit[4642]: NETFILTER_CFG table=nat:110 family=2 entries=27 op=nft_register_chain pid=4642 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:19.002000 audit[4642]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffe58fbf00 a2=0 a3=ffff830fb6c0 items=0 ppid=3263 pid=4642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:19.002000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:19.324894 systemd-networkd[1611]: cali921d47b398b: Gained IPv6LL Feb 12 20:24:19.349134 kubelet[3105]: I0212 20:24:19.349073 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-hzrlr" podStartSLOduration=37.348995513 pod.CreationTimestamp="2024-02-12 20:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:19.318617177 +0000 UTC m=+52.027297750" watchObservedRunningTime="2024-02-12 20:24:19.348995513 +0000 UTC m=+52.057676086" Feb 12 20:24:19.432000 audit[4668]: NETFILTER_CFG table=filter:111 family=2 entries=12 op=nft_register_rule pid=4668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:19.432000 audit[4668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffda275960 a2=0 a3=ffff8144b6c0 items=0 ppid=3263 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:19.432000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:19.433000 audit[4668]: NETFILTER_CFG table=nat:112 family=2 entries=30 op=nft_register_rule pid=4668 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:19.433000 audit[4668]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8836 a0=3 a1=ffffda275960 a2=0 a3=ffff8144b6c0 items=0 ppid=3263 pid=4668 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:19.433000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:19.773808 systemd-networkd[1611]: cali5db6df91c7c: Gained IPv6LL Feb 12 20:24:19.961404 env[1824]: time="2024-02-12T20:24:19.961336346Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:19.965206 env[1824]: time="2024-02-12T20:24:19.965152253Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:19.969859 env[1824]: time="2024-02-12T20:24:19.968234881Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:19.972866 env[1824]: time="2024-02-12T20:24:19.971644238Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:2b9021393c17e87ba8a3c89f5b3719941812f4e4751caa0b71eb2233bff48738,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:19.973170 env[1824]: time="2024-02-12T20:24:19.972820317Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.27.0\" returns image reference \"sha256:4b71e7439e0eba34a97844591560a009f37e8e6c17a386a34d416c1cc872dee8\"" Feb 12 20:24:19.977262 env[1824]: time="2024-02-12T20:24:19.977204698Z" level=info msg="CreateContainer within sandbox \"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 12 20:24:20.012975 env[1824]: time="2024-02-12T20:24:20.012506676Z" level=info msg="StopPodSandbox for 
\"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\"" Feb 12 20:24:20.049246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1681369659.mount: Deactivated successfully. Feb 12 20:24:20.083423 env[1824]: time="2024-02-12T20:24:20.083338786Z" level=info msg="CreateContainer within sandbox \"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"5920dec4e94ec754cff12480180297683be3fb1af27d9250b777642581ca1bce\"" Feb 12 20:24:20.086858 env[1824]: time="2024-02-12T20:24:20.086805047Z" level=info msg="StartContainer for \"5920dec4e94ec754cff12480180297683be3fb1af27d9250b777642581ca1bce\"" Feb 12 20:24:20.228883 systemd[1]: run-containerd-runc-k8s.io-5920dec4e94ec754cff12480180297683be3fb1af27d9250b777642581ca1bce-runc.bMEg51.mount: Deactivated successfully. Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.296 [INFO][4723] k8s.go 578: Cleaning up netns ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.297 [INFO][4723] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" iface="eth0" netns="/var/run/netns/cni-4db7333e-0674-d696-9e77-eec55b56d0f0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.297 [INFO][4723] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" iface="eth0" netns="/var/run/netns/cni-4db7333e-0674-d696-9e77-eec55b56d0f0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.297 [INFO][4723] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" iface="eth0" netns="/var/run/netns/cni-4db7333e-0674-d696-9e77-eec55b56d0f0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.298 [INFO][4723] k8s.go 585: Releasing IP address(es) ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.298 [INFO][4723] utils.go 188: Calico CNI releasing IP address ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.402 [INFO][4771] ipam_plugin.go 415: Releasing address using handleID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.402 [INFO][4771] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.403 [INFO][4771] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.435 [WARNING][4771] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.435 [INFO][4771] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.438 [INFO][4771] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:20.443712 env[1824]: 2024-02-12 20:24:20.441 [INFO][4723] k8s.go 591: Teardown processing complete. ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:20.445884 env[1824]: time="2024-02-12T20:24:20.445791901Z" level=info msg="TearDown network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" successfully" Feb 12 20:24:20.445884 env[1824]: time="2024-02-12T20:24:20.445872744Z" level=info msg="StopPodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" returns successfully" Feb 12 20:24:20.446865 env[1824]: time="2024-02-12T20:24:20.446814841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855554b9-7fn7j,Uid:00bc6aea-403c-4dfd-a949-09ac35a4157e,Namespace:calico-system,Attempt:1,}" Feb 12 20:24:20.609338 env[1824]: time="2024-02-12T20:24:20.609274526Z" level=info msg="StartContainer for \"5920dec4e94ec754cff12480180297683be3fb1af27d9250b777642581ca1bce\" returns successfully" Feb 12 20:24:20.614087 env[1824]: time="2024-02-12T20:24:20.614028404Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\"" Feb 12 20:24:20.731000 audit[4843]: 
NETFILTER_CFG table=filter:113 family=2 entries=9 op=nft_register_rule pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:20.731000 audit[4843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffffbc5c660 a2=0 a3=ffffa9ad16c0 items=0 ppid=3263 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:20.738000 audit[4843]: NETFILTER_CFG table=nat:114 family=2 entries=51 op=nft_register_chain pid=4843 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:20.738000 audit[4843]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=fffffbc5c660 a2=0 a3=ffffa9ad16c0 items=0 ppid=3263 pid=4843 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.738000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:20.801169 systemd-networkd[1611]: cali587fd09eed7: Link UP Feb 12 20:24:20.806073 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:24:20.806244 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali587fd09eed7: link becomes ready Feb 12 20:24:20.806500 systemd-networkd[1611]: cali587fd09eed7: Gained carrier Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.614 [INFO][4796] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0 calico-kube-controllers-69855554b9- calico-system 
00bc6aea-403c-4dfd-a949-09ac35a4157e 712 0 2024-02-12 20:23:51 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:69855554b9 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-16-195 calico-kube-controllers-69855554b9-7fn7j eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali587fd09eed7 [] []}} ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.614 [INFO][4796] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.726 [INFO][4833] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" HandleID="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.747 [INFO][4833] ipam_plugin.go 268: Auto assigning IP ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" HandleID="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002bca30), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-16-195", "pod":"calico-kube-controllers-69855554b9-7fn7j", "timestamp":"2024-02-12 20:24:20.726046216 +0000 UTC"}, Hostname:"ip-172-31-16-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.747 [INFO][4833] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.747 [INFO][4833] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.747 [INFO][4833] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-195' Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.750 [INFO][4833] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.757 [INFO][4833] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.764 [INFO][4833] ipam.go 489: Trying affinity for 192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.767 [INFO][4833] ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.771 [INFO][4833] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.771 [INFO][4833] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.774 
[INFO][4833] ipam.go 1682: Creating new handle: k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.781 [INFO][4833] ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.794 [INFO][4833] ipam.go 1216: Successfully claimed IPs: [192.168.126.3/26] block=192.168.126.0/26 handle="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.794 [INFO][4833] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.3/26] handle="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" host="ip-172-31-16-195" Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.794 [INFO][4833] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:20.831176 env[1824]: 2024-02-12 20:24:20.794 [INFO][4833] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.126.3/26] IPv6=[] ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" HandleID="k8s-pod-network.64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.797 [INFO][4796] k8s.go 385: Populated endpoint ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0", GenerateName:"calico-kube-controllers-69855554b9-", Namespace:"calico-system", SelfLink:"", UID:"00bc6aea-403c-4dfd-a949-09ac35a4157e", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855554b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"", Pod:"calico-kube-controllers-69855554b9-7fn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fd09eed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.797 [INFO][4796] k8s.go 386: Calico CNI using IPs: [192.168.126.3/32] ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.797 [INFO][4796] dataplane_linux.go 68: Setting the host side veth name to cali587fd09eed7 ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.806 [INFO][4796] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.809 [INFO][4796] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0", GenerateName:"calico-kube-controllers-69855554b9-", Namespace:"calico-system", SelfLink:"", UID:"00bc6aea-403c-4dfd-a949-09ac35a4157e", ResourceVersion:"712", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855554b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c", Pod:"calico-kube-controllers-69855554b9-7fn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fd09eed7", MAC:"6a:97:46:7b:1a:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:20.832682 env[1824]: 2024-02-12 20:24:20.826 [INFO][4796] k8s.go 491: Wrote updated endpoint to datastore ContainerID="64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c" Namespace="calico-system" Pod="calico-kube-controllers-69855554b9-7fn7j" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:20.874356 env[1824]: time="2024-02-12T20:24:20.873102290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:20.874356 env[1824]: time="2024-02-12T20:24:20.873193583Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:20.874356 env[1824]: time="2024-02-12T20:24:20.873220931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:20.874789 env[1824]: time="2024-02-12T20:24:20.874717355Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c pid=4863 runtime=io.containerd.runc.v2 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit: BPF prog-id=10 op=LOAD Feb 12 20:24:20.926000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd21d60d8 a2=70 a3=0 items=0 ppid=4673 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.926000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:24:20.926000 audit: BPF prog-id=10 op=UNLOAD Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit: BPF prog-id=11 op=LOAD Feb 12 20:24:20.926000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd21d60d8 a2=70 a3=4a174c items=0 ppid=4673 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.926000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:24:20.926000 audit: BPF prog-id=11 op=UNLOAD Feb 12 20:24:20.926000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.926000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=0 a1=ffffd21d6108 a2=70 a3=1bd1473f items=0 ppid=4673 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.926000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { perfmon } 
for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { perfmon } for pid=4891 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit[4891]: AVC avc: denied { bpf } for pid=4891 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.927000 audit: BPF prog-id=12 op=LOAD Feb 12 20:24:20.927000 audit[4891]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd21d6058 a2=70 a3=1bd14759 items=0 ppid=4673 pid=4891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.927000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Feb 12 20:24:20.976000 audit[4902]: AVC avc: denied { bpf } for pid=4902 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.976000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffa1cb3e8 a2=70 a3=0 items=0 ppid=4673 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.976000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:24:20.977000 audit[4902]: AVC avc: denied { bpf } for pid=4902 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Feb 12 20:24:20.977000 audit[4902]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=fffffa1cb2c8 a2=70 a3=2 items=0 ppid=4673 pid=4902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:20.977000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Feb 12 20:24:21.004661 systemd-networkd[1611]: vxlan.calico: Link UP Feb 12 20:24:21.004683 systemd-networkd[1611]: vxlan.calico: Gained carrier Feb 12 20:24:21.016713 env[1824]: time="2024-02-12T20:24:21.010728862Z" level=info msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\"" Feb 12 20:24:21.025000 audit: BPF prog-id=12 op=UNLOAD Feb 12 20:24:21.035535 systemd[1]: run-netns-cni\x2d4db7333e\x2d0674\x2dd696\x2d9e77\x2deec55b56d0f0.mount: Deactivated successfully. 
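The `PROCTITLE` records in the bpftool audit lines above are hex-encoded: the kernel logs the full command line as hex bytes, with argv entries separated by NUL bytes. A minimal sketch to decode one (the hex string is copied verbatim from the records above):

```python
# Decode an audit PROCTITLE record: hex bytes, argv separated by NUL.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    return raw.decode("utf-8", errors="replace").split("\x00")

# Hex copied from the bpftool PROCTITLE records logged above.
PROCTITLE = (
    "627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F"
    "2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F"
    "7864702F70726566696C7465725F76315F63616C69636F5F746D705F4100747970"
    "6500786470"
)

print(decode_proctitle(PROCTITLE))
```

Decoding shows these records are Calico loading its XDP prefilter program: `bpftool prog load /usr/lib/calico/bpf/filter.o /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp`.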
Feb 12 20:24:21.069745 (udev-worker)[4918]: Network interface NamePolicy= disabled on kernel command line. Feb 12 20:24:21.207567 env[1824]: time="2024-02-12T20:24:21.207364459Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-69855554b9-7fn7j,Uid:00bc6aea-403c-4dfd-a949-09ac35a4157e,Namespace:calico-system,Attempt:1,} returns sandbox id \"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c\"" Feb 12 20:24:21.212000 audit[4961]: NETFILTER_CFG table=mangle:115 family=2 entries=19 op=nft_register_chain pid=4961 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.212000 audit[4961]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6800 a0=3 a1=fffffd6b3580 a2=0 a3=ffffa8383fa8 items=0 ppid=4673 pid=4961 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.212000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.213000 audit[4951]: NETFILTER_CFG table=raw:116 family=2 entries=19 op=nft_register_chain pid=4951 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.213000 audit[4951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6132 a0=3 a1=fffffe270350 a2=0 a3=ffff9b87ffa8 items=0 ppid=4673 pid=4951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.213000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.245000 audit[4962]: NETFILTER_CFG table=nat:117 family=2 entries=16 
op=nft_register_chain pid=4962 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.245000 audit[4962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5188 a0=3 a1=ffffe593ef40 a2=0 a3=ffff8aca7fa8 items=0 ppid=4673 pid=4962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.245000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.254000 audit[4963]: NETFILTER_CFG table=filter:118 family=2 entries=103 op=nft_register_chain pid=4963 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.254000 audit[4963]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=54800 a0=3 a1=ffffdc2b8850 a2=0 a3=ffffa795cfa8 items=0 ppid=4673 pid=4963 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.254000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.321000 audit[4979]: NETFILTER_CFG table=filter:119 family=2 entries=44 op=nft_register_chain pid=4979 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.321000 audit[4979]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=22360 a0=3 a1=ffffeb3efc70 a2=0 a3=ffff849dbfa8 items=0 ppid=4673 pid=4979 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.321000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.265 [INFO][4931] k8s.go 578: Cleaning up netns ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.265 [INFO][4931] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" iface="eth0" netns="/var/run/netns/cni-1d09ae63-1777-2cc3-e781-8a71eaace60b" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.266 [INFO][4931] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" iface="eth0" netns="/var/run/netns/cni-1d09ae63-1777-2cc3-e781-8a71eaace60b" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.266 [INFO][4931] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" iface="eth0" netns="/var/run/netns/cni-1d09ae63-1777-2cc3-e781-8a71eaace60b" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.266 [INFO][4931] k8s.go 585: Releasing IP address(es) ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.266 [INFO][4931] utils.go 188: Calico CNI releasing IP address ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.352 [INFO][4970] ipam_plugin.go 415: Releasing address using handleID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.352 [INFO][4970] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.353 [INFO][4970] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.368 [WARNING][4970] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.368 [INFO][4970] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.371 [INFO][4970] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:21.376710 env[1824]: 2024-02-12 20:24:21.373 [INFO][4931] k8s.go 591: Teardown processing complete. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:21.381368 systemd[1]: run-netns-cni\x2d1d09ae63\x2d1777\x2d2cc3\x2de781\x2d8a71eaace60b.mount: Deactivated successfully. Feb 12 20:24:21.383870 env[1824]: time="2024-02-12T20:24:21.383799992Z" level=info msg="TearDown network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" successfully" Feb 12 20:24:21.384036 env[1824]: time="2024-02-12T20:24:21.384003556Z" level=info msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" returns successfully" Feb 12 20:24:21.385228 env[1824]: time="2024-02-12T20:24:21.385155313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r94bf,Uid:465dca2b-3292-462e-bbd5-a3a7982cda7e,Namespace:kube-system,Attempt:1,}" Feb 12 20:24:21.631177 (udev-worker)[4915]: Network interface NamePolicy= disabled on kernel command line. 
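The `run-netns-cni\x2d...mount` unit names above use systemd's unit-name escaping, which encodes `-` (and other reserved bytes) as `\xNN`. A small sketch to undo the escaping and recover the mount path component, using the unit name logged above:

```python
import re

def systemd_unescape(unit: str) -> str:
    # systemd encodes reserved bytes in unit names as \xNN; undo that.
    return re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), unit)

# Unit name copied from the "Deactivated successfully" line above.
unit = r"run-netns-cni\x2d1d09ae63\x2d1777\x2d2cc3\x2de781\x2d8a71eaace60b.mount"
print(systemd_unescape(unit))
# run-netns-cni-1d09ae63-1777-2cc3-e781-8a71eaace60b.mount
```

The unescaped name matches the CNI netns path `/var/run/netns/cni-1d09ae63-1777-2cc3-e781-8a71eaace60b` that Calico tears down in the surrounding lines.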
Feb 12 20:24:21.636664 systemd-networkd[1611]: calib6dab6b0f17: Link UP Feb 12 20:24:21.640234 systemd-networkd[1611]: calib6dab6b0f17: Gained carrier Feb 12 20:24:21.640747 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib6dab6b0f17: link becomes ready Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.483 [INFO][4982] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0 coredns-787d4945fb- kube-system 465dca2b-3292-462e-bbd5-a3a7982cda7e 725 0 2024-02-12 20:23:42 +0000 UTC map[k8s-app:kube-dns pod-template-hash:787d4945fb projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-16-195 coredns-787d4945fb-r94bf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib6dab6b0f17 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.483 [INFO][4982] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.550 [INFO][4993] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" HandleID="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.571 [INFO][4993] ipam_plugin.go 268: Auto assigning IP 
ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" HandleID="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b38d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-16-195", "pod":"coredns-787d4945fb-r94bf", "timestamp":"2024-02-12 20:24:21.550853077 +0000 UTC"}, Hostname:"ip-172-31-16-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.571 [INFO][4993] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.571 [INFO][4993] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.571 [INFO][4993] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-195' Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.574 [INFO][4993] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.587 [INFO][4993] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.594 [INFO][4993] ipam.go 489: Trying affinity for 192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.597 [INFO][4993] ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.603 [INFO][4993] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:24:21.665968 
env[1824]: 2024-02-12 20:24:21.604 [INFO][4993] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.607 [INFO][4993] ipam.go 1682: Creating new handle: k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4 Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.615 [INFO][4993] ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.624 [INFO][4993] ipam.go 1216: Successfully claimed IPs: [192.168.126.4/26] block=192.168.126.0/26 handle="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.624 [INFO][4993] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.4/26] handle="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" host="ip-172-31-16-195" Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.625 [INFO][4993] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:21.665968 env[1824]: 2024-02-12 20:24:21.625 [INFO][4993] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.126.4/26] IPv6=[] ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" HandleID="k8s-pod-network.ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.628 [INFO][4982] k8s.go 385: Populated endpoint ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"465dca2b-3292-462e-bbd5-a3a7982cda7e", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"", Pod:"coredns-787d4945fb-r94bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dab6b0f17", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.629 [INFO][4982] k8s.go 386: Calico CNI using IPs: [192.168.126.4/32] ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.629 [INFO][4982] dataplane_linux.go 68: Setting the host side veth name to calib6dab6b0f17 ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.641 [INFO][4982] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.642 [INFO][4982] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"465dca2b-3292-462e-bbd5-a3a7982cda7e", ResourceVersion:"725", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4", Pod:"coredns-787d4945fb-r94bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dab6b0f17", MAC:"b2:3a:9b:03:9f:2d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:21.667236 env[1824]: 2024-02-12 20:24:21.659 [INFO][4982] k8s.go 491: Wrote updated endpoint to datastore ContainerID="ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4" Namespace="kube-system" Pod="coredns-787d4945fb-r94bf" 
WorkloadEndpoint="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:21.704406 kernel: kauditd_printk_skb: 107 callbacks suppressed Feb 12 20:24:21.704585 kernel: audit: type=1325 audit(1707769461.695:308): table=filter:120 family=2 entries=34 op=nft_register_chain pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.695000 audit[5013]: NETFILTER_CFG table=filter:120 family=2 entries=34 op=nft_register_chain pid=5013 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:24:21.704741 env[1824]: time="2024-02-12T20:24:21.700157293Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:24:21.704741 env[1824]: time="2024-02-12T20:24:21.700311214Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:24:21.704741 env[1824]: time="2024-02-12T20:24:21.700374536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:24:21.704741 env[1824]: time="2024-02-12T20:24:21.700974054Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4 pid=5021 runtime=io.containerd.runc.v2 Feb 12 20:24:21.695000 audit[5013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=17884 a0=3 a1=ffffc3e77f80 a2=0 a3=ffffa9948fa8 items=0 ppid=4673 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.695000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.737299 kernel: audit: type=1300 audit(1707769461.695:308): arch=c00000b7 syscall=211 success=yes exit=17884 a0=3 a1=ffffc3e77f80 a2=0 a3=ffffa9948fa8 items=0 ppid=4673 pid=5013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:21.737442 kernel: audit: type=1327 audit(1707769461.695:308): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:24:21.843143 env[1824]: time="2024-02-12T20:24:21.843061207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-r94bf,Uid:465dca2b-3292-462e-bbd5-a3a7982cda7e,Namespace:kube-system,Attempt:1,} returns sandbox id \"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4\"" Feb 12 20:24:21.852672 env[1824]: time="2024-02-12T20:24:21.852613775Z" level=info msg="CreateContainer within sandbox 
\"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 12 20:24:21.892224 env[1824]: time="2024-02-12T20:24:21.889309985Z" level=info msg="CreateContainer within sandbox \"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"42b511cef53262adb828e10cfee6f606b935142d723bebbd05bc1117eddfc858\"" Feb 12 20:24:21.893805 env[1824]: time="2024-02-12T20:24:21.893753325Z" level=info msg="StartContainer for \"42b511cef53262adb828e10cfee6f606b935142d723bebbd05bc1117eddfc858\"" Feb 12 20:24:22.000047 env[1824]: time="2024-02-12T20:24:21.998792016Z" level=info msg="StartContainer for \"42b511cef53262adb828e10cfee6f606b935142d723bebbd05bc1117eddfc858\" returns successfully" Feb 12 20:24:22.415237 kubelet[3105]: I0212 20:24:22.413832 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-r94bf" podStartSLOduration=40.413701583 pod.CreationTimestamp="2024-02-12 20:23:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:22.372611703 +0000 UTC m=+55.081292312" watchObservedRunningTime="2024-02-12 20:24:22.413701583 +0000 UTC m=+55.122382168" Feb 12 20:24:22.697463 env[1824]: time="2024-02-12T20:24:22.697295826Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:22.699000 audit[5122]: NETFILTER_CFG table=filter:121 family=2 entries=6 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:22.699000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe581e920 a2=0 a3=ffff8e24c6c0 items=0 ppid=3263 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:22.719533 kernel: audit: type=1325 audit(1707769462.699:309): table=filter:121 family=2 entries=6 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:22.719705 kernel: audit: type=1300 audit(1707769462.699:309): arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe581e920 a2=0 a3=ffff8e24c6c0 items=0 ppid=3263 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:22.699000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:22.727172 kernel: audit: type=1327 audit(1707769462.699:309): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:22.727636 env[1824]: time="2024-02-12T20:24:22.727586865Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:22.731714 env[1824]: time="2024-02-12T20:24:22.731643768Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:22.735986 env[1824]: time="2024-02-12T20:24:22.735914495Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:45a7aba6020a7cf7b866cb8a8d481b30c97e9b3407e1459aaa65a5b4cc06633a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:22.738789 env[1824]: 
time="2024-02-12T20:24:22.737663695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.27.0\" returns image reference \"sha256:9dbda087e98c46610fb8629cf530f1fe49eee4b17d2afe455664ca446ec39d43\"" Feb 12 20:24:22.740229 env[1824]: time="2024-02-12T20:24:22.740154550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\"" Feb 12 20:24:22.742849 env[1824]: time="2024-02-12T20:24:22.742770538Z" level=info msg="CreateContainer within sandbox \"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 12 20:24:22.750000 audit[5122]: NETFILTER_CFG table=nat:122 family=2 entries=60 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:22.750000 audit[5122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffe581e920 a2=0 a3=ffff8e24c6c0 items=0 ppid=3263 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:22.772776 kernel: audit: type=1325 audit(1707769462.750:310): table=nat:122 family=2 entries=60 op=nft_register_rule pid=5122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:22.772925 kernel: audit: type=1300 audit(1707769462.750:310): arch=c00000b7 syscall=211 success=yes exit=19324 a0=3 a1=ffffe581e920 a2=0 a3=ffff8e24c6c0 items=0 ppid=3263 pid=5122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:22.750000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:22.779795 kernel: audit: type=1327 audit(1707769462.750:310): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:22.780787 systemd-networkd[1611]: cali587fd09eed7: Gained IPv6LL Feb 12 20:24:22.806827 env[1824]: time="2024-02-12T20:24:22.806760383Z" level=info msg="CreateContainer within sandbox \"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"acda12c52c0d50505ca8666897ddf566540eaaac2628610eb3a1574e8f16e2d0\"" Feb 12 20:24:22.807805 env[1824]: time="2024-02-12T20:24:22.807723565Z" level=info msg="StartContainer for \"acda12c52c0d50505ca8666897ddf566540eaaac2628610eb3a1574e8f16e2d0\"" Feb 12 20:24:22.853452 systemd-networkd[1611]: vxlan.calico: Gained IPv6LL Feb 12 20:24:23.024341 systemd[1]: run-containerd-runc-k8s.io-acda12c52c0d50505ca8666897ddf566540eaaac2628610eb3a1574e8f16e2d0-runc.pHvrq7.mount: Deactivated successfully. Feb 12 20:24:23.036816 systemd-networkd[1611]: calib6dab6b0f17: Gained IPv6LL Feb 12 20:24:23.041000 audit[5172]: NETFILTER_CFG table=filter:123 family=2 entries=6 op=nft_register_rule pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:23.049614 kernel: audit: type=1325 audit(1707769463.041:311): table=filter:123 family=2 entries=6 op=nft_register_rule pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:23.041000 audit[5172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=fffff3d8cc80 a2=0 a3=ffff849e36c0 items=0 ppid=3263 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:23.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:23.064000 audit[5172]: NETFILTER_CFG table=nat:124 
family=2 entries=72 op=nft_register_chain pid=5172 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:24:23.064000 audit[5172]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff3d8cc80 a2=0 a3=ffff849e36c0 items=0 ppid=3263 pid=5172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:23.064000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:24:23.139771 env[1824]: time="2024-02-12T20:24:23.139705055Z" level=info msg="StartContainer for \"acda12c52c0d50505ca8666897ddf566540eaaac2628610eb3a1574e8f16e2d0\" returns successfully" Feb 12 20:24:23.989284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1920514079.mount: Deactivated successfully. Feb 12 20:24:24.135607 kubelet[3105]: I0212 20:24:24.135463 3105 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 12 20:24:24.135607 kubelet[3105]: I0212 20:24:24.135568 3105 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 12 20:24:25.539839 env[1824]: time="2024-02-12T20:24:25.539781590Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:25.542715 env[1824]: time="2024-02-12T20:24:25.542659225Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:25.547428 env[1824]: 
time="2024-02-12T20:24:25.547374092Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:25.552593 env[1824]: time="2024-02-12T20:24:25.550973187Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:e264ab1fb2f1ae90dd1d84e226d11d2eb4350e74ac27de4c65f29f5aadba5bb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:24:25.554374 env[1824]: time="2024-02-12T20:24:25.553255118Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.27.0\" returns image reference \"sha256:094645649618376e48b5ec13a94a164d53dbdf819b7ab644f080b751f24560c8\"" Feb 12 20:24:25.573579 env[1824]: time="2024-02-12T20:24:25.567177676Z" level=info msg="CreateContainer within sandbox \"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Feb 12 20:24:25.606973 env[1824]: time="2024-02-12T20:24:25.606905598Z" level=info msg="CreateContainer within sandbox \"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f75fe05dfa83a08f915c792e63d3e189bf2ff83990a680a9c14bf480a17197d9\"" Feb 12 20:24:25.611964 env[1824]: time="2024-02-12T20:24:25.611869555Z" level=info msg="StartContainer for \"f75fe05dfa83a08f915c792e63d3e189bf2ff83990a680a9c14bf480a17197d9\"" Feb 12 20:24:25.821358 env[1824]: time="2024-02-12T20:24:25.821180667Z" level=info msg="StartContainer for \"f75fe05dfa83a08f915c792e63d3e189bf2ff83990a680a9c14bf480a17197d9\" returns successfully" Feb 12 20:24:26.411222 kubelet[3105]: I0212 20:24:26.405907 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-tpmmd" podStartSLOduration=-9.223372001448933e+09 
pod.CreationTimestamp="2024-02-12 20:23:51 +0000 UTC" firstStartedPulling="2024-02-12 20:24:18.259224556 +0000 UTC m=+50.967905117" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:23.38264795 +0000 UTC m=+56.091328523" watchObservedRunningTime="2024-02-12 20:24:26.405842216 +0000 UTC m=+59.114522777" Feb 12 20:24:26.550089 kubelet[3105]: I0212 20:24:26.550021 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-69855554b9-7fn7j" podStartSLOduration=-9.223372001304813e+09 pod.CreationTimestamp="2024-02-12 20:23:51 +0000 UTC" firstStartedPulling="2024-02-12 20:24:21.217531665 +0000 UTC m=+53.926212226" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:24:26.406356117 +0000 UTC m=+59.115036714" watchObservedRunningTime="2024-02-12 20:24:26.549962894 +0000 UTC m=+59.258643455" Feb 12 20:24:27.670735 env[1824]: time="2024-02-12T20:24:27.670642351Z" level=info msg="StopPodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\"" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.762 [WARNING][5260] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0", GenerateName:"calico-kube-controllers-69855554b9-", Namespace:"calico-system", SelfLink:"", UID:"00bc6aea-403c-4dfd-a949-09ac35a4157e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855554b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c", Pod:"calico-kube-controllers-69855554b9-7fn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fd09eed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.763 [INFO][5260] k8s.go 578: Cleaning up netns ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.763 [INFO][5260] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" iface="eth0" netns="" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.763 [INFO][5260] k8s.go 585: Releasing IP address(es) ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.763 [INFO][5260] utils.go 188: Calico CNI releasing IP address ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.816 [INFO][5267] ipam_plugin.go 415: Releasing address using handleID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.817 [INFO][5267] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.817 [INFO][5267] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.830 [WARNING][5267] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.830 [INFO][5267] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.833 [INFO][5267] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:27.838339 env[1824]: 2024-02-12 20:24:27.835 [INFO][5260] k8s.go 591: Teardown processing complete. ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:27.839312 env[1824]: time="2024-02-12T20:24:27.838379521Z" level=info msg="TearDown network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" successfully" Feb 12 20:24:27.839312 env[1824]: time="2024-02-12T20:24:27.838428528Z" level=info msg="StopPodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" returns successfully" Feb 12 20:24:27.839814 env[1824]: time="2024-02-12T20:24:27.839768144Z" level=info msg="RemovePodSandbox for \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\"" Feb 12 20:24:27.840062 env[1824]: time="2024-02-12T20:24:27.839970568Z" level=info msg="Forcibly stopping sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\"" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.928 [WARNING][5287] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0", GenerateName:"calico-kube-controllers-69855554b9-", Namespace:"calico-system", SelfLink:"", UID:"00bc6aea-403c-4dfd-a949-09ac35a4157e", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"69855554b9", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"64e0f1b85ecb475f335c9659146243c3fd09929893912ca5aedde5379f37670c", Pod:"calico-kube-controllers-69855554b9-7fn7j", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.126.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali587fd09eed7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.928 [INFO][5287] k8s.go 578: Cleaning up netns ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.928 [INFO][5287] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" iface="eth0" netns="" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.928 [INFO][5287] k8s.go 585: Releasing IP address(es) ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.928 [INFO][5287] utils.go 188: Calico CNI releasing IP address ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.987 [INFO][5295] ipam_plugin.go 415: Releasing address using handleID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.988 [INFO][5295] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:27.988 [INFO][5295] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:28.010 [WARNING][5295] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:28.010 [INFO][5295] ipam_plugin.go 443: Releasing address using workloadID ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" HandleID="k8s-pod-network.7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Workload="ip--172--31--16--195-k8s-calico--kube--controllers--69855554b9--7fn7j-eth0" Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:28.013 [INFO][5295] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:28.020883 env[1824]: 2024-02-12 20:24:28.016 [INFO][5287] k8s.go 591: Teardown processing complete. ContainerID="7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e" Feb 12 20:24:28.020883 env[1824]: time="2024-02-12T20:24:28.019767487Z" level=info msg="TearDown network for sandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" successfully" Feb 12 20:24:28.026768 env[1824]: time="2024-02-12T20:24:28.026618665Z" level=info msg="RemovePodSandbox \"7e513c5f36092d960fa5cafdd0be3854672ba1a80703188e68d01b974895731e\" returns successfully" Feb 12 20:24:28.027852 env[1824]: time="2024-02-12T20:24:28.027766778Z" level=info msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\"" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.133 [WARNING][5314] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c257656-1c37-42c0-80d9-a7f2f0b7582d", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64", Pod:"csi-node-driver-tpmmd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali921d47b398b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.134 [INFO][5314] k8s.go 578: Cleaning up netns ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.134 [INFO][5314] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" iface="eth0" netns="" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.134 [INFO][5314] k8s.go 585: Releasing IP address(es) ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.134 [INFO][5314] utils.go 188: Calico CNI releasing IP address ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.192 [INFO][5321] ipam_plugin.go 415: Releasing address using handleID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.192 [INFO][5321] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.192 [INFO][5321] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.215 [WARNING][5321] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.215 [INFO][5321] ipam_plugin.go 443: Releasing address using workloadID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.225 [INFO][5321] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:28.232649 env[1824]: 2024-02-12 20:24:28.230 [INFO][5314] k8s.go 591: Teardown processing complete. ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.233621 env[1824]: time="2024-02-12T20:24:28.232737100Z" level=info msg="TearDown network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" successfully" Feb 12 20:24:28.233621 env[1824]: time="2024-02-12T20:24:28.232784295Z" level=info msg="StopPodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" returns successfully" Feb 12 20:24:28.233825 env[1824]: time="2024-02-12T20:24:28.233693061Z" level=info msg="RemovePodSandbox for \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\"" Feb 12 20:24:28.233920 env[1824]: time="2024-02-12T20:24:28.233854566Z" level=info msg="Forcibly stopping sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\"" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.336 [WARNING][5342] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"8c257656-1c37-42c0-80d9-a7f2f0b7582d", ResourceVersion:"746", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7c77f88967", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"aa80b9192e97c264a62ff6d52c3414d54bbab6997c81cd3db9c744161f644a64", Pod:"csi-node-driver-tpmmd", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.126.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali921d47b398b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.336 [INFO][5342] k8s.go 578: Cleaning up netns ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.336 [INFO][5342] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" iface="eth0" netns="" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.337 [INFO][5342] k8s.go 585: Releasing IP address(es) ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.337 [INFO][5342] utils.go 188: Calico CNI releasing IP address ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.421 [INFO][5349] ipam_plugin.go 415: Releasing address using handleID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.421 [INFO][5349] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.421 [INFO][5349] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.437 [WARNING][5349] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.437 [INFO][5349] ipam_plugin.go 443: Releasing address using workloadID ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" HandleID="k8s-pod-network.d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Workload="ip--172--31--16--195-k8s-csi--node--driver--tpmmd-eth0" Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.440 [INFO][5349] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:28.444836 env[1824]: 2024-02-12 20:24:28.442 [INFO][5342] k8s.go 591: Teardown processing complete. ContainerID="d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a" Feb 12 20:24:28.445808 env[1824]: time="2024-02-12T20:24:28.444859663Z" level=info msg="TearDown network for sandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" successfully" Feb 12 20:24:28.456406 env[1824]: time="2024-02-12T20:24:28.455896622Z" level=info msg="RemovePodSandbox \"d0b13cfa3d34d5933646d704390ba2cf6630a0f1004ae55623c8d3182cbf253a\" returns successfully" Feb 12 20:24:28.458894 env[1824]: time="2024-02-12T20:24:28.458837799Z" level=info msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\"" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.622 [WARNING][5371] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"465dca2b-3292-462e-bbd5-a3a7982cda7e", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", 
ContainerID:"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4", Pod:"coredns-787d4945fb-r94bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dab6b0f17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.623 [INFO][5371] k8s.go 578: Cleaning up netns ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.623 [INFO][5371] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" iface="eth0" netns="" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.623 [INFO][5371] k8s.go 585: Releasing IP address(es) ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.623 [INFO][5371] utils.go 188: Calico CNI releasing IP address ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.692 [INFO][5379] ipam_plugin.go 415: Releasing address using handleID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.693 [INFO][5379] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.693 [INFO][5379] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.713 [WARNING][5379] ipam_plugin.go 432: Asked to release address but it doesn't exist. Ignoring ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.713 [INFO][5379] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.719 [INFO][5379] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:24:28.725203 env[1824]: 2024-02-12 20:24:28.721 [INFO][5371] k8s.go 591: Teardown processing complete. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.725203 env[1824]: time="2024-02-12T20:24:28.724036100Z" level=info msg="TearDown network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" successfully" Feb 12 20:24:28.725203 env[1824]: time="2024-02-12T20:24:28.724086223Z" level=info msg="StopPodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" returns successfully" Feb 12 20:24:28.726992 env[1824]: time="2024-02-12T20:24:28.725636448Z" level=info msg="RemovePodSandbox for \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\"" Feb 12 20:24:28.726992 env[1824]: time="2024-02-12T20:24:28.726316042Z" level=info msg="Forcibly stopping sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\"" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.838 [WARNING][5399] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"465dca2b-3292-462e-bbd5-a3a7982cda7e", ResourceVersion:"737", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"ddbb43b141ce51be8404770ebad6c3d433654bb6b627665cd70145c66bedaea4", Pod:"coredns-787d4945fb-r94bf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib6dab6b0f17", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.839 [INFO][5399] k8s.go 578: Cleaning up netns 
ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.839 [INFO][5399] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" iface="eth0" netns="" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.839 [INFO][5399] k8s.go 585: Releasing IP address(es) ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.839 [INFO][5399] utils.go 188: Calico CNI releasing IP address ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.955 [INFO][5405] ipam_plugin.go 415: Releasing address using handleID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.956 [INFO][5405] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.956 [INFO][5405] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.978 [WARNING][5405] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.978 [INFO][5405] ipam_plugin.go 443: Releasing address using workloadID ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" HandleID="k8s-pod-network.f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--r94bf-eth0" Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.982 [INFO][5405] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:28.988348 env[1824]: 2024-02-12 20:24:28.984 [INFO][5399] k8s.go 591: Teardown processing complete. ContainerID="f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7" Feb 12 20:24:28.989752 env[1824]: time="2024-02-12T20:24:28.989690560Z" level=info msg="TearDown network for sandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" successfully" Feb 12 20:24:28.996014 env[1824]: time="2024-02-12T20:24:28.995941895Z" level=info msg="RemovePodSandbox \"f425ca24c423310ce8b43bffd4ddbcfb6a2f71865481e82b3fd3e15ad6bf36e7\" returns successfully" Feb 12 20:24:28.996907 env[1824]: time="2024-02-12T20:24:28.996863488Z" level=info msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\"" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.131 [WARNING][5423] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2cbee1c0-7ffa-4607-96e5-c1d08a403936", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea", Pod:"coredns-787d4945fb-hzrlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db6df91c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.132 [INFO][5423] k8s.go 578: Cleaning up netns 
ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.134 [INFO][5423] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" iface="eth0" netns="" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.134 [INFO][5423] k8s.go 585: Releasing IP address(es) ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.134 [INFO][5423] utils.go 188: Calico CNI releasing IP address ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.188 [INFO][5429] ipam_plugin.go 415: Releasing address using handleID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.188 [INFO][5429] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.189 [INFO][5429] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.208 [WARNING][5429] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.208 [INFO][5429] ipam_plugin.go 443: Releasing address using workloadID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.211 [INFO][5429] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:29.219245 env[1824]: 2024-02-12 20:24:29.213 [INFO][5423] k8s.go 591: Teardown processing complete. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.220305 env[1824]: time="2024-02-12T20:24:29.219281636Z" level=info msg="TearDown network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" successfully" Feb 12 20:24:29.220305 env[1824]: time="2024-02-12T20:24:29.219326983Z" level=info msg="StopPodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" returns successfully" Feb 12 20:24:29.220820 env[1824]: time="2024-02-12T20:24:29.220774478Z" level=info msg="RemovePodSandbox for \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\"" Feb 12 20:24:29.221216 env[1824]: time="2024-02-12T20:24:29.221141563Z" level=info msg="Forcibly stopping sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\"" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.288 [WARNING][5448] k8s.go 542: CNI_CONTAINERID does not match WorkloadEndpoint ConainerID, don't delete WEP. 
ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0", GenerateName:"coredns-787d4945fb-", Namespace:"kube-system", SelfLink:"", UID:"2cbee1c0-7ffa-4607-96e5-c1d08a403936", ResourceVersion:"703", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 23, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"787d4945fb", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"847db004e2e2e72b6a5394ff86102c2920ee1bf59ab4efe8527255c96630a2ea", Pod:"coredns-787d4945fb-hzrlr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.126.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5db6df91c7c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.289 [INFO][5448] k8s.go 578: Cleaning up netns 
ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.289 [INFO][5448] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" iface="eth0" netns="" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.289 [INFO][5448] k8s.go 585: Releasing IP address(es) ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.289 [INFO][5448] utils.go 188: Calico CNI releasing IP address ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.324 [INFO][5455] ipam_plugin.go 415: Releasing address using handleID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.324 [INFO][5455] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.324 [INFO][5455] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.339 [WARNING][5455] ipam_plugin.go 432: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.339 [INFO][5455] ipam_plugin.go 443: Releasing address using workloadID ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" HandleID="k8s-pod-network.eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Workload="ip--172--31--16--195-k8s-coredns--787d4945fb--hzrlr-eth0" Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.342 [INFO][5455] ipam_plugin.go 377: Released host-wide IPAM lock. Feb 12 20:24:29.349431 env[1824]: 2024-02-12 20:24:29.345 [INFO][5448] k8s.go 591: Teardown processing complete. ContainerID="eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f" Feb 12 20:24:29.350455 env[1824]: time="2024-02-12T20:24:29.349682353Z" level=info msg="TearDown network for sandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" successfully" Feb 12 20:24:29.355330 env[1824]: time="2024-02-12T20:24:29.355226232Z" level=info msg="RemovePodSandbox \"eb3448b65e319c2d33222ea1b9f1bc4690ba429af84631ee2341436f2d87ff7f\" returns successfully" Feb 12 20:24:33.761358 systemd[1]: Started sshd@7-172.31.16.195:22-147.75.109.163:36410.service. Feb 12 20:24:33.760000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.195:22-147.75.109.163:36410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:33.764491 kernel: kauditd_printk_skb: 5 callbacks suppressed Feb 12 20:24:33.764657 kernel: audit: type=1130 audit(1707769473.760:313): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.195:22-147.75.109.163:36410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:33.952000 audit[5474]: USER_ACCT pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.954614 sshd[5474]: Accepted publickey for core from 147.75.109.163 port 36410 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:33.966623 kernel: audit: type=1101 audit(1707769473.952:314): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.964000 audit[5474]: CRED_ACQ pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.967896 sshd[5474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:33.978815 systemd[1]: Started session-8.scope. 
Feb 12 20:24:33.982227 kernel: audit: type=1103 audit(1707769473.964:315): pid=5474 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:33.982398 kernel: audit: type=1006 audit(1707769473.964:316): pid=5474 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Feb 12 20:24:33.964000 audit[5474]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe98835b0 a2=3 a3=1 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:33.982623 systemd-logind[1800]: New session 8 of user core. Feb 12 20:24:33.993896 kernel: audit: type=1300 audit(1707769473.964:316): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe98835b0 a2=3 a3=1 items=0 ppid=1 pid=5474 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:33.964000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:33.994000 audit[5474]: USER_START pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.013724 kernel: audit: type=1327 audit(1707769473.964:316): proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:34.013902 kernel: audit: type=1105 audit(1707769473.994:317): pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.000000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.026694 kernel: audit: type=1103 audit(1707769474.000:318): pid=5477 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.288163 sshd[5474]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:34.289000 audit[5474]: USER_END pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.302816 systemd[1]: sshd@7-172.31.16.195:22-147.75.109.163:36410.service: Deactivated successfully. 
Feb 12 20:24:34.291000 audit[5474]: CRED_DISP pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.313521 kernel: audit: type=1106 audit(1707769474.289:319): pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.313687 kernel: audit: type=1104 audit(1707769474.291:320): pid=5474 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:34.305435 systemd[1]: session-8.scope: Deactivated successfully. Feb 12 20:24:34.306419 systemd-logind[1800]: Session 8 logged out. Waiting for processes to exit. Feb 12 20:24:34.308370 systemd-logind[1800]: Removed session 8. Feb 12 20:24:34.301000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.16.195:22-147.75.109.163:36410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:34.919355 systemd[1]: run-containerd-runc-k8s.io-f75fe05dfa83a08f915c792e63d3e189bf2ff83990a680a9c14bf480a17197d9-runc.v7eJ1O.mount: Deactivated successfully. Feb 12 20:24:39.326793 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:24:39.326945 kernel: audit: type=1130 audit(1707769479.314:322): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.195:22-147.75.109.163:42266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:39.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.195:22-147.75.109.163:42266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:39.315891 systemd[1]: Started sshd@8-172.31.16.195:22-147.75.109.163:42266.service. Feb 12 20:24:39.491000 audit[5507]: USER_ACCT pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.496376 sshd[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:39.497519 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 42266 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:39.494000 audit[5507]: CRED_ACQ pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.512986 kernel: audit: type=1101 audit(1707769479.491:323): pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.513114 kernel: audit: type=1103 audit(1707769479.494:324): pid=5507 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.513195 kernel: audit: type=1006 audit(1707769479.494:325): pid=5507 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Feb 12 20:24:39.494000 audit[5507]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff92abbc0 a2=3 a3=1 items=0 ppid=1 pid=5507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:39.529447 kernel: audit: type=1300 audit(1707769479.494:325): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff92abbc0 a2=3 a3=1 items=0 ppid=1 pid=5507 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:39.494000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:39.534209 kernel: audit: type=1327 audit(1707769479.494:325): proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:39.538297 systemd-logind[1800]: New session 9 of user core. Feb 12 20:24:39.539261 systemd[1]: Started session-9.scope. 
Feb 12 20:24:39.549000 audit[5507]: USER_START pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.552000 audit[5510]: CRED_ACQ pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.571831 kernel: audit: type=1105 audit(1707769479.549:326): pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.571982 kernel: audit: type=1103 audit(1707769479.552:327): pid=5510 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.788904 sshd[5507]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:39.789000 audit[5507]: USER_END pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.794746 systemd[1]: sshd@8-172.31.16.195:22-147.75.109.163:42266.service: Deactivated successfully. Feb 12 20:24:39.796783 systemd[1]: session-9.scope: Deactivated successfully. 
Feb 12 20:24:39.790000 audit[5507]: CRED_DISP pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.811925 kernel: audit: type=1106 audit(1707769479.789:328): pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.812070 kernel: audit: type=1104 audit(1707769479.790:329): pid=5507 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:39.812063 systemd-logind[1800]: Session 9 logged out. Waiting for processes to exit. Feb 12 20:24:39.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.16.195:22-147.75.109.163:42266 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:39.814685 systemd-logind[1800]: Removed session 9. Feb 12 20:24:44.815070 systemd[1]: Started sshd@9-172.31.16.195:22-147.75.109.163:54104.service. Feb 12 20:24:44.827440 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:24:44.827574 kernel: audit: type=1130 audit(1707769484.814:331): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.195:22-147.75.109.163:54104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:44.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.195:22-147.75.109.163:54104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:44.993000 audit[5532]: USER_ACCT pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:44.995287 sshd[5532]: Accepted publickey for core from 147.75.109.163 port 54104 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:45.007640 kernel: audit: type=1101 audit(1707769484.993:332): pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.007773 kernel: audit: type=1103 audit(1707769485.005:333): pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.005000 audit[5532]: CRED_ACQ pid=5532 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.008182 sshd[5532]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:45.023476 kernel: audit: type=1006 audit(1707769485.005:334): pid=5532 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Feb 12 
20:24:45.005000 audit[5532]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc27831b0 a2=3 a3=1 items=0 ppid=1 pid=5532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:45.034062 kernel: audit: type=1300 audit(1707769485.005:334): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc27831b0 a2=3 a3=1 items=0 ppid=1 pid=5532 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:45.005000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:45.038565 kernel: audit: type=1327 audit(1707769485.005:334): proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:45.045308 systemd-logind[1800]: New session 10 of user core. Feb 12 20:24:45.047086 systemd[1]: Started session-10.scope. Feb 12 20:24:45.057000 audit[5532]: USER_START pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.071810 kernel: audit: type=1105 audit(1707769485.057:335): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.071966 kernel: audit: type=1103 audit(1707769485.070:336): pid=5535 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 
20:24:45.070000 audit[5535]: CRED_ACQ pid=5535 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.312773 sshd[5532]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:45.313000 audit[5532]: USER_END pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.327706 systemd[1]: sshd@9-172.31.16.195:22-147.75.109.163:54104.service: Deactivated successfully. Feb 12 20:24:45.330784 systemd-logind[1800]: Session 10 logged out. Waiting for processes to exit. Feb 12 20:24:45.318000 audit[5532]: CRED_DISP pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.340942 kernel: audit: type=1106 audit(1707769485.313:337): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.341056 kernel: audit: type=1104 audit(1707769485.318:338): pid=5532 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:45.332789 systemd[1]: session-10.scope: Deactivated successfully. 
Feb 12 20:24:45.334728 systemd-logind[1800]: Removed session 10. Feb 12 20:24:45.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.16.195:22-147.75.109.163:54104 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:49.161890 amazon-ssm-agent[1782]: 2024-02-12 20:24:49 INFO [HealthCheck] HealthCheck reporting agent health. Feb 12 20:24:50.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.195:22-147.75.109.163:54108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:50.337940 systemd[1]: Started sshd@10-172.31.16.195:22-147.75.109.163:54108.service. Feb 12 20:24:50.344691 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:24:50.344781 kernel: audit: type=1130 audit(1707769490.336:340): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.195:22-147.75.109.163:54108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:50.512000 audit[5568]: USER_ACCT pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.515620 sshd[5568]: Accepted publickey for core from 147.75.109.163 port 54108 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:50.526239 sshd[5568]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:50.523000 audit[5568]: CRED_ACQ pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.539054 kernel: audit: type=1101 audit(1707769490.512:341): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.539153 kernel: audit: type=1103 audit(1707769490.523:342): pid=5568 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.545574 kernel: audit: type=1006 audit(1707769490.524:343): pid=5568 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Feb 12 20:24:50.524000 audit[5568]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3152e40 a2=3 a3=1 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:50.556147 kernel: audit: type=1300 audit(1707769490.524:343): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe3152e40 a2=3 a3=1 items=0 ppid=1 pid=5568 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:50.561733 kernel: audit: type=1327 audit(1707769490.524:343): proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:50.524000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:50.561028 systemd-logind[1800]: New session 11 of user core. Feb 12 20:24:50.562917 systemd[1]: Started session-11.scope. Feb 12 20:24:50.573000 audit[5568]: USER_START pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.587635 kernel: audit: type=1105 audit(1707769490.573:344): pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.586000 audit[5571]: CRED_ACQ pid=5571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.605594 kernel: audit: type=1103 audit(1707769490.586:345): pid=5571 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Feb 12 20:24:50.829049 sshd[5568]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:50.829000 audit[5568]: USER_END pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.834503 systemd[1]: sshd@10-172.31.16.195:22-147.75.109.163:54108.service: Deactivated successfully. Feb 12 20:24:50.836204 systemd[1]: session-11.scope: Deactivated successfully. Feb 12 20:24:50.843969 systemd-logind[1800]: Session 11 logged out. Waiting for processes to exit. Feb 12 20:24:50.830000 audit[5568]: CRED_DISP pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.854088 kernel: audit: type=1106 audit(1707769490.829:346): pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.854201 kernel: audit: type=1104 audit(1707769490.830:347): pid=5568 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:50.855245 systemd-logind[1800]: Removed session 11. Feb 12 20:24:50.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.16.195:22-147.75.109.163:54108 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:24:55.854400 systemd[1]: Started sshd@11-172.31.16.195:22-147.75.109.163:35874.service. Feb 12 20:24:55.858581 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:24:55.858703 kernel: audit: type=1130 audit(1707769495.855:349): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.195:22-147.75.109.163:35874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:55.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.195:22-147.75.109.163:35874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:56.022000 audit[5583]: USER_ACCT pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.023762 sshd[5583]: Accepted publickey for core from 147.75.109.163 port 35874 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:56.026932 sshd[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.025000 audit[5583]: CRED_ACQ pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.044000 kernel: audit: type=1101 audit(1707769496.022:350): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.044154 kernel: audit: 
type=1103 audit(1707769496.025:351): pid=5583 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.045700 kernel: audit: type=1006 audit(1707769496.025:352): pid=5583 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Feb 12 20:24:56.051705 systemd-logind[1800]: New session 12 of user core. Feb 12 20:24:56.052794 systemd[1]: Started session-12.scope. Feb 12 20:24:56.025000 audit[5583]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd909d50 a2=3 a3=1 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:56.071069 kernel: audit: type=1300 audit(1707769496.025:352): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffd909d50 a2=3 a3=1 items=0 ppid=1 pid=5583 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:56.025000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:56.075589 kernel: audit: type=1327 audit(1707769496.025:352): proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:56.083000 audit[5583]: USER_START pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.104253 kernel: audit: type=1105 audit(1707769496.083:353): pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.109689 kernel: audit: type=1103 audit(1707769496.099:354): pid=5586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.099000 audit[5586]: CRED_ACQ pid=5586 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.399007 sshd[5583]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:56.400000 audit[5583]: USER_END pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.413791 systemd[1]: sshd@11-172.31.16.195:22-147.75.109.163:35874.service: Deactivated successfully. Feb 12 20:24:56.416735 systemd[1]: session-12.scope: Deactivated successfully. Feb 12 20:24:56.418275 systemd-logind[1800]: Session 12 logged out. Waiting for processes to exit. Feb 12 20:24:56.426029 systemd-logind[1800]: Removed session 12. Feb 12 20:24:56.430471 systemd[1]: Started sshd@12-172.31.16.195:22-147.75.109.163:35880.service. 
Feb 12 20:24:56.400000 audit[5583]: CRED_DISP pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.449597 kernel: audit: type=1106 audit(1707769496.400:355): pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.449720 kernel: audit: type=1104 audit(1707769496.400:356): pid=5583 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.16.195:22-147.75.109.163:35874 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:56.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.195:22-147.75.109.163:35880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:24:56.621812 sshd[5596]: Accepted publickey for core from 147.75.109.163 port 35880 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:56.621000 audit[5596]: USER_ACCT pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.626000 audit[5596]: CRED_ACQ pid=5596 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.626000 audit[5596]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6c87f00 a2=3 a3=1 items=0 ppid=1 pid=5596 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:56.626000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:56.627864 sshd[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:56.636844 systemd-logind[1800]: New session 13 of user core. Feb 12 20:24:56.638794 systemd[1]: Started session-13.scope. 
Feb 12 20:24:56.650000 audit[5596]: USER_START pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:56.655000 audit[5599]: CRED_ACQ pid=5599 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.028928 sshd[5596]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:59.031000 audit[5596]: USER_END pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.031000 audit[5596]: CRED_DISP pid=5596 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.035338 systemd[1]: sshd@12-172.31.16.195:22-147.75.109.163:35880.service: Deactivated successfully. Feb 12 20:24:59.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.16.195:22-147.75.109.163:35880 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:59.039089 systemd[1]: session-13.scope: Deactivated successfully. Feb 12 20:24:59.039150 systemd-logind[1800]: Session 13 logged out. Waiting for processes to exit. Feb 12 20:24:59.043187 systemd-logind[1800]: Removed session 13. 
Feb 12 20:24:59.070322 systemd[1]: Started sshd@13-172.31.16.195:22-147.75.109.163:35892.service. Feb 12 20:24:59.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.195:22-147.75.109.163:35892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:59.270000 audit[5607]: USER_ACCT pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.271065 sshd[5607]: Accepted publickey for core from 147.75.109.163 port 35892 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:24:59.272000 audit[5607]: CRED_ACQ pid=5607 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.272000 audit[5607]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc24c3e20 a2=3 a3=1 items=0 ppid=1 pid=5607 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:24:59.272000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:24:59.274791 sshd[5607]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:24:59.283809 systemd-logind[1800]: New session 14 of user core. Feb 12 20:24:59.284943 systemd[1]: Started session-14.scope. 
Feb 12 20:24:59.295000 audit[5607]: USER_START pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.298000 audit[5610]: CRED_ACQ pid=5610 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.545876 sshd[5607]: pam_unix(sshd:session): session closed for user core Feb 12 20:24:59.547000 audit[5607]: USER_END pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.547000 audit[5607]: CRED_DISP pid=5607 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:24:59.551845 systemd-logind[1800]: Session 14 logged out. Waiting for processes to exit. Feb 12 20:24:59.552440 systemd[1]: sshd@13-172.31.16.195:22-147.75.109.163:35892.service: Deactivated successfully. Feb 12 20:24:59.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.16.195:22-147.75.109.163:35892 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:24:59.554427 systemd[1]: session-14.scope: Deactivated successfully. Feb 12 20:24:59.556322 systemd-logind[1800]: Removed session 14. 
Feb 12 20:25:04.584495 kernel: kauditd_printk_skb: 23 callbacks suppressed Feb 12 20:25:04.584689 kernel: audit: type=1130 audit(1707769504.572:376): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.195:22-147.75.109.163:46310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:04.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.195:22-147.75.109.163:46310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:04.572932 systemd[1]: Started sshd@14-172.31.16.195:22-147.75.109.163:46310.service. Feb 12 20:25:04.741000 audit[5632]: USER_ACCT pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.744202 sshd[5632]: Accepted publickey for core from 147.75.109.163 port 46310 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:04.745898 sshd[5632]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:04.741000 audit[5632]: CRED_ACQ pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.761645 kernel: audit: type=1101 audit(1707769504.741:377): pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.761796 kernel: audit: type=1103 
audit(1707769504.741:378): pid=5632 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.768124 kernel: audit: type=1006 audit(1707769504.741:379): pid=5632 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Feb 12 20:25:04.741000 audit[5632]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1a7b8a0 a2=3 a3=1 items=0 ppid=1 pid=5632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:04.774585 systemd[1]: Started session-15.scope. Feb 12 20:25:04.775458 systemd-logind[1800]: New session 15 of user core. Feb 12 20:25:04.779595 kernel: audit: type=1300 audit(1707769504.741:379): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1a7b8a0 a2=3 a3=1 items=0 ppid=1 pid=5632 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:04.741000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:04.791042 kernel: audit: type=1327 audit(1707769504.741:379): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:04.791143 kernel: audit: type=1105 audit(1707769504.785:380): pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.785000 audit[5632]: USER_START pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open 
grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.803856 kernel: audit: type=1103 audit(1707769504.802:381): pid=5636 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:04.802000 audit[5636]: CRED_ACQ pid=5636 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:05.093861 sshd[5632]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:05.095000 audit[5632]: USER_END pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:05.099359 systemd-logind[1800]: Session 15 logged out. Waiting for processes to exit. Feb 12 20:25:05.102105 systemd[1]: sshd@14-172.31.16.195:22-147.75.109.163:46310.service: Deactivated successfully. Feb 12 20:25:05.103647 systemd[1]: session-15.scope: Deactivated successfully. Feb 12 20:25:05.106241 systemd-logind[1800]: Removed session 15. 
Feb 12 20:25:05.095000 audit[5632]: CRED_DISP pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:05.118214 kernel: audit: type=1106 audit(1707769505.095:382): pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:05.118357 kernel: audit: type=1104 audit(1707769505.095:383): pid=5632 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:05.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.16.195:22-147.75.109.163:46310 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:10.119344 systemd[1]: Started sshd@15-172.31.16.195:22-147.75.109.163:46316.service. Feb 12 20:25:10.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.195:22-147.75.109.163:46316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:10.125599 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:10.125736 kernel: audit: type=1130 audit(1707769510.120:385): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.195:22-147.75.109.163:46316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:10.290000 audit[5665]: USER_ACCT pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.293331 sshd[5665]: Accepted publickey for core from 147.75.109.163 port 46316 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:10.295166 sshd[5665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:10.293000 audit[5665]: CRED_ACQ pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.309420 systemd[1]: Started session-16.scope. Feb 12 20:25:10.312938 kernel: audit: type=1101 audit(1707769510.290:386): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.313070 kernel: audit: type=1103 audit(1707769510.293:387): pid=5665 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.313124 kernel: audit: type=1006 audit(1707769510.293:388): pid=5665 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Feb 12 20:25:10.311305 systemd-logind[1800]: New session 16 of user core. 
Feb 12 20:25:10.293000 audit[5665]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc819910 a2=3 a3=1 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:10.330074 kernel: audit: type=1300 audit(1707769510.293:388): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc819910 a2=3 a3=1 items=0 ppid=1 pid=5665 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:10.330189 kernel: audit: type=1327 audit(1707769510.293:388): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:10.293000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:10.333000 audit[5665]: USER_START pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.336000 audit[5668]: CRED_ACQ pid=5668 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.354737 kernel: audit: type=1105 audit(1707769510.333:389): pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.354922 kernel: audit: type=1103 audit(1707769510.336:390): pid=5668 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.571901 sshd[5665]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:10.573000 audit[5665]: USER_END pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.578149 systemd[1]: sshd@15-172.31.16.195:22-147.75.109.163:46316.service: Deactivated successfully. Feb 12 20:25:10.579700 systemd[1]: session-16.scope: Deactivated successfully. Feb 12 20:25:10.587739 systemd-logind[1800]: Session 16 logged out. Waiting for processes to exit. Feb 12 20:25:10.575000 audit[5665]: CRED_DISP pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.588609 kernel: audit: type=1106 audit(1707769510.573:391): pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.16.195:22-147.75.109.163:46316 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:10.598592 kernel: audit: type=1104 audit(1707769510.575:392): pid=5665 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:10.598873 systemd-logind[1800]: Removed session 16. Feb 12 20:25:15.597396 systemd[1]: Started sshd@16-172.31.16.195:22-147.75.109.163:54528.service. Feb 12 20:25:15.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.195:22-147.75.109.163:54528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:15.604316 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:15.604424 kernel: audit: type=1130 audit(1707769515.597:394): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.195:22-147.75.109.163:54528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:15.769000 audit[5681]: USER_ACCT pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.770645 sshd[5681]: Accepted publickey for core from 147.75.109.163 port 54528 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:15.782600 kernel: audit: type=1101 audit(1707769515.769:395): pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.782739 kernel: audit: type=1103 audit(1707769515.781:396): pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.781000 audit[5681]: CRED_ACQ pid=5681 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.784159 sshd[5681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:15.798460 kernel: audit: type=1006 audit(1707769515.782:397): pid=5681 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=17 res=1 Feb 12 20:25:15.782000 audit[5681]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8efc680 a2=3 a3=1 items=0 ppid=1 pid=5681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:15.809263 kernel: audit: type=1300 audit(1707769515.782:397): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff8efc680 a2=3 a3=1 items=0 ppid=1 pid=5681 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:15.782000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:15.815468 kernel: audit: type=1327 audit(1707769515.782:397): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:15.814991 systemd[1]: Started session-17.scope. Feb 12 20:25:15.815805 systemd-logind[1800]: New session 17 of user core. Feb 12 20:25:15.827000 audit[5681]: USER_START pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.830000 audit[5684]: CRED_ACQ pid=5684 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.849253 kernel: audit: type=1105 audit(1707769515.827:398): pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:15.849394 kernel: audit: type=1103 audit(1707769515.830:399): pid=5684 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh 
res=success' Feb 12 20:25:16.092296 sshd[5681]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:16.093000 audit[5681]: USER_END pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:16.099000 audit[5681]: CRED_DISP pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:16.106316 systemd[1]: sshd@16-172.31.16.195:22-147.75.109.163:54528.service: Deactivated successfully. Feb 12 20:25:16.109239 systemd[1]: session-17.scope: Deactivated successfully. Feb 12 20:25:16.109570 systemd-logind[1800]: Session 17 logged out. Waiting for processes to exit. Feb 12 20:25:16.113089 systemd-logind[1800]: Removed session 17. Feb 12 20:25:16.115072 kernel: audit: type=1106 audit(1707769516.093:400): pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:16.115243 kernel: audit: type=1104 audit(1707769516.099:401): pid=5681 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:16.105000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.16.195:22-147.75.109.163:54528 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 12 20:25:16.499312 systemd[1]: run-containerd-runc-k8s.io-af901d6703a0a98cfe9ca7d379bb6b0db9c836db509fea0f0b516342eee99ff1-runc.yO0GQS.mount: Deactivated successfully. Feb 12 20:25:21.118097 systemd[1]: Started sshd@17-172.31.16.195:22-147.75.109.163:54538.service. Feb 12 20:25:21.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.195:22-147.75.109.163:54538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.122572 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:21.122679 kernel: audit: type=1130 audit(1707769521.118:403): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.195:22-147.75.109.163:54538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:21.299151 sshd[5716]: Accepted publickey for core from 147.75.109.163 port 54538 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:21.298000 audit[5716]: USER_ACCT pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.305793 sshd[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:21.302000 audit[5716]: CRED_ACQ pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.321093 kernel: audit: type=1101 audit(1707769521.298:404): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.321176 kernel: audit: type=1103 audit(1707769521.302:405): pid=5716 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.327213 kernel: audit: type=1006 audit(1707769521.302:406): pid=5716 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=18 res=1 Feb 12 20:25:21.302000 audit[5716]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff72d4440 a2=3 a3=1 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:21.337943 kernel: audit: type=1300 audit(1707769521.302:406): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff72d4440 a2=3 a3=1 items=0 ppid=1 pid=5716 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:21.302000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:21.342174 kernel: audit: type=1327 audit(1707769521.302:406): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:21.346657 systemd-logind[1800]: New session 18 of user core. Feb 12 20:25:21.350491 systemd[1]: Started session-18.scope. 
Feb 12 20:25:21.368000 audit[5716]: USER_START pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.381000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.392555 kernel: audit: type=1105 audit(1707769521.368:407): pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.392721 kernel: audit: type=1103 audit(1707769521.381:408): pid=5719 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.630877 sshd[5716]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:21.633000 audit[5716]: USER_END pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.637294 systemd-logind[1800]: Session 18 logged out. Waiting for processes to exit. Feb 12 20:25:21.641836 systemd[1]: sshd@17-172.31.16.195:22-147.75.109.163:54538.service: Deactivated successfully. 
Feb 12 20:25:21.643998 systemd[1]: session-18.scope: Deactivated successfully. Feb 12 20:25:21.633000 audit[5716]: CRED_DISP pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.648212 systemd-logind[1800]: Removed session 18. Feb 12 20:25:21.656284 kernel: audit: type=1106 audit(1707769521.633:409): pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.656450 kernel: audit: type=1104 audit(1707769521.633:410): pid=5716 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:21.639000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.16.195:22-147.75.109.163:54538 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:26.660470 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:26.660632 kernel: audit: type=1130 audit(1707769526.657:412): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.195:22-147.75.109.163:40540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:26.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.16.195:22-147.75.109.163:40540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:26.658038 systemd[1]: Started sshd@18-172.31.16.195:22-147.75.109.163:40540.service. Feb 12 20:25:26.840000 audit[5730]: USER_ACCT pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.843687 sshd[5730]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:26.851288 sshd[5730]: Accepted publickey for core from 147.75.109.163 port 40540 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:26.840000 audit[5730]: CRED_ACQ pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.861412 kernel: audit: type=1101 audit(1707769526.840:413): pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.861607 kernel: audit: type=1103 audit(1707769526.840:414): pid=5730 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.868115 kernel: audit: type=1006 audit(1707769526.840:415): pid=5730 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=19 res=1 Feb 12 20:25:26.840000 audit[5730]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe27326a0 a2=3 a3=1 items=0 ppid=1 pid=5730 auid=500 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:26.878930 kernel: audit: type=1300 audit(1707769526.840:415): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe27326a0 a2=3 a3=1 items=0 ppid=1 pid=5730 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:26.881717 kernel: audit: type=1327 audit(1707769526.840:415): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:26.840000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:26.882950 systemd-logind[1800]: New session 19 of user core. Feb 12 20:25:26.885331 systemd[1]: Started session-19.scope. Feb 12 20:25:26.905000 audit[5730]: USER_START pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.920604 kernel: audit: type=1105 audit(1707769526.905:416): pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.921000 audit[5733]: CRED_ACQ pid=5733 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:26.933662 kernel: audit: type=1103 audit(1707769526.921:417): pid=5733 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.163900 sshd[5730]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:27.165000 audit[5730]: USER_END pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.169926 systemd-logind[1800]: Session 19 logged out. Waiting for processes to exit. Feb 12 20:25:27.174140 systemd[1]: sshd@18-172.31.16.195:22-147.75.109.163:40540.service: Deactivated successfully. Feb 12 20:25:27.175848 systemd[1]: session-19.scope: Deactivated successfully. Feb 12 20:25:27.179417 systemd-logind[1800]: Removed session 19. Feb 12 20:25:27.166000 audit[5730]: CRED_DISP pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.190370 kernel: audit: type=1106 audit(1707769527.165:418): pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.190499 kernel: audit: type=1104 audit(1707769527.166:419): pid=5730 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='unit=sshd@18-172.31.16.195:22-147.75.109.163:40540 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:27.197223 systemd[1]: Started sshd@19-172.31.16.195:22-147.75.109.163:40554.service. Feb 12 20:25:27.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.195:22-147.75.109.163:40554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:27.376000 audit[5743]: USER_ACCT pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.378959 sshd[5743]: Accepted publickey for core from 147.75.109.163 port 40554 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:27.378000 audit[5743]: CRED_ACQ pid=5743 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.378000 audit[5743]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8fbe910 a2=3 a3=1 items=0 ppid=1 pid=5743 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:27.378000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:27.380071 sshd[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:27.389462 systemd[1]: Started session-20.scope. Feb 12 20:25:27.390975 systemd-logind[1800]: New session 20 of user core. 
Feb 12 20:25:27.411000 audit[5743]: USER_START pid=5743 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.418000 audit[5746]: CRED_ACQ pid=5746 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.947557 sshd[5743]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:27.948000 audit[5743]: USER_END pid=5743 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.949000 audit[5743]: CRED_DISP pid=5743 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:27.953120 systemd-logind[1800]: Session 20 logged out. Waiting for processes to exit. Feb 12 20:25:27.953560 systemd[1]: sshd@19-172.31.16.195:22-147.75.109.163:40554.service: Deactivated successfully. Feb 12 20:25:27.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.16.195:22-147.75.109.163:40554 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:27.956604 systemd[1]: session-20.scope: Deactivated successfully. Feb 12 20:25:27.957729 systemd-logind[1800]: Removed session 20. 
Feb 12 20:25:27.973786 systemd[1]: Started sshd@20-172.31.16.195:22-147.75.109.163:40568.service. Feb 12 20:25:27.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.195:22-147.75.109.163:40568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:28.177000 audit[5755]: USER_ACCT pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:28.178476 sshd[5755]: Accepted publickey for core from 147.75.109.163 port 40568 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:28.179000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:28.179000 audit[5755]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff364f90 a2=3 a3=1 items=0 ppid=1 pid=5755 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:28.179000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:28.181068 sshd[5755]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:28.190286 systemd-logind[1800]: New session 21 of user core. Feb 12 20:25:28.191507 systemd[1]: Started session-21.scope. 
Feb 12 20:25:28.202000 audit[5755]: USER_START pid=5755 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:28.206000 audit[5759]: CRED_ACQ pid=5759 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:29.853813 sshd[5755]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:29.855000 audit[5755]: USER_END pid=5755 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:29.856000 audit[5755]: CRED_DISP pid=5755 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:29.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.16.195:22-147.75.109.163:40568 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:29.860225 systemd[1]: sshd@20-172.31.16.195:22-147.75.109.163:40568.service: Deactivated successfully. Feb 12 20:25:29.863275 systemd-logind[1800]: Session 21 logged out. Waiting for processes to exit. Feb 12 20:25:29.865967 systemd[1]: session-21.scope: Deactivated successfully. Feb 12 20:25:29.868802 systemd-logind[1800]: Removed session 21. 
Feb 12 20:25:29.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.195:22-147.75.109.163:40576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:29.878769 systemd[1]: Started sshd@21-172.31.16.195:22-147.75.109.163:40576.service. Feb 12 20:25:29.991000 audit[5817]: NETFILTER_CFG table=filter:125 family=2 entries=6 op=nft_register_rule pid=5817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.991000 audit[5817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffdf6bf8c0 a2=0 a3=ffffb96cb6c0 items=0 ppid=3263 pid=5817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.991000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:29.997000 audit[5817]: NETFILTER_CFG table=nat:126 family=2 entries=78 op=nft_register_rule pid=5817 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:29.997000 audit[5817]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffdf6bf8c0 a2=0 a3=ffffb96cb6c0 items=0 ppid=3263 pid=5817 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:29.997000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:30.069000 audit[5796]: USER_ACCT pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" 
hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.070455 sshd[5796]: Accepted publickey for core from 147.75.109.163 port 40576 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:30.072000 audit[5796]: CRED_ACQ pid=5796 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.072000 audit[5796]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe8bb6fe0 a2=3 a3=1 items=0 ppid=1 pid=5796 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.072000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:30.073901 sshd[5796]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:30.083798 systemd[1]: Started session-22.scope. Feb 12 20:25:30.085717 systemd-logind[1800]: New session 22 of user core. 
Feb 12 20:25:30.095000 audit[5796]: USER_START pid=5796 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.098000 audit[5844]: CRED_ACQ pid=5844 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.109000 audit[5845]: NETFILTER_CFG table=filter:127 family=2 entries=18 op=nft_register_rule pid=5845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:30.109000 audit[5845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=fffff33e5110 a2=0 a3=ffffb8ece6c0 items=0 ppid=3263 pid=5845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:30.113000 audit[5845]: NETFILTER_CFG table=nat:128 family=2 entries=78 op=nft_register_rule pid=5845 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:30.113000 audit[5845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=fffff33e5110 a2=0 a3=ffffb8ece6c0 items=0 ppid=3263 pid=5845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.113000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Feb 12 20:25:30.614366 sshd[5796]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:30.617000 audit[5796]: USER_END pid=5796 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.617000 audit[5796]: CRED_DISP pid=5796 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.621634 systemd-logind[1800]: Session 22 logged out. Waiting for processes to exit. Feb 12 20:25:30.623132 systemd[1]: sshd@21-172.31.16.195:22-147.75.109.163:40576.service: Deactivated successfully. Feb 12 20:25:30.623000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.16.195:22-147.75.109.163:40576 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:30.625487 systemd[1]: session-22.scope: Deactivated successfully. Feb 12 20:25:30.628231 systemd-logind[1800]: Removed session 22. Feb 12 20:25:30.640212 systemd[1]: Started sshd@22-172.31.16.195:22-147.75.109.163:40578.service. Feb 12 20:25:30.641000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.195:22-147.75.109.163:40578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:30.814000 audit[5853]: USER_ACCT pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.815460 sshd[5853]: Accepted publickey for core from 147.75.109.163 port 40578 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:30.817000 audit[5853]: CRED_ACQ pid=5853 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.817000 audit[5853]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed3da110 a2=3 a3=1 items=0 ppid=1 pid=5853 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:30.817000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:30.818823 sshd[5853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:30.827821 systemd-logind[1800]: New session 23 of user core. Feb 12 20:25:30.828150 systemd[1]: Started session-23.scope. 
Feb 12 20:25:30.838000 audit[5853]: USER_START pid=5853 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:30.841000 audit[5856]: CRED_ACQ pid=5856 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:31.083898 sshd[5853]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:31.085000 audit[5853]: USER_END pid=5853 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:31.086000 audit[5853]: CRED_DISP pid=5853 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:31.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.16.195:22-147.75.109.163:40578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:31.089348 systemd-logind[1800]: Session 23 logged out. Waiting for processes to exit. Feb 12 20:25:31.089962 systemd[1]: sshd@22-172.31.16.195:22-147.75.109.163:40578.service: Deactivated successfully. Feb 12 20:25:31.092875 systemd[1]: session-23.scope: Deactivated successfully. Feb 12 20:25:31.094829 systemd-logind[1800]: Removed session 23. 
Feb 12 20:25:34.392672 kubelet[3105]: I0212 20:25:34.392606 3105 topology_manager.go:210] "Topology Admit Handler" Feb 12 20:25:34.534000 audit[5892]: NETFILTER_CFG table=filter:129 family=2 entries=30 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.538270 kernel: kauditd_printk_skb: 57 callbacks suppressed Feb 12 20:25:34.538407 kernel: audit: type=1325 audit(1707769534.534:461): table=filter:129 family=2 entries=30 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.534000 audit[5892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd197ab40 a2=0 a3=ffff8b35e6c0 items=0 ppid=3263 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.549019 kubelet[3105]: I0212 20:25:34.548977 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dzh99\" (UniqueName: \"kubernetes.io/projected/6f5e7353-e554-4c8d-a4a5-83a624c5349d-kube-api-access-dzh99\") pod \"calico-apiserver-7d5f545cdf-6f4m7\" (UID: \"6f5e7353-e554-4c8d-a4a5-83a624c5349d\") " pod="calico-apiserver/calico-apiserver-7d5f545cdf-6f4m7" Feb 12 20:25:34.549354 kubelet[3105]: I0212 20:25:34.549331 3105 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6f5e7353-e554-4c8d-a4a5-83a624c5349d-calico-apiserver-certs\") pod \"calico-apiserver-7d5f545cdf-6f4m7\" (UID: \"6f5e7353-e554-4c8d-a4a5-83a624c5349d\") " pod="calico-apiserver/calico-apiserver-7d5f545cdf-6f4m7" Feb 12 20:25:34.558001 kernel: audit: type=1300 audit(1707769534.534:461): arch=c00000b7 syscall=211 success=yes exit=10364 a0=3 a1=ffffd197ab40 a2=0 a3=ffff8b35e6c0 items=0 ppid=3263 pid=5892 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.564613 kernel: audit: type=1327 audit(1707769534.534:461): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.537000 audit[5892]: NETFILTER_CFG table=nat:130 family=2 entries=78 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.571818 kernel: audit: type=1325 audit(1707769534.537:462): table=nat:130 family=2 entries=78 op=nft_register_rule pid=5892 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.537000 audit[5892]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd197ab40 a2=0 a3=ffff8b35e6c0 items=0 ppid=3263 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.586427 kernel: audit: type=1300 audit(1707769534.537:462): arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffd197ab40 a2=0 a3=ffff8b35e6c0 items=0 ppid=3263 pid=5892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.593824 kernel: audit: type=1327 audit(1707769534.537:462): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.650802 kubelet[3105]: E0212 20:25:34.650643 3105 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Feb 12 20:25:34.650802 kubelet[3105]: E0212 20:25:34.650772 3105 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6f5e7353-e554-4c8d-a4a5-83a624c5349d-calico-apiserver-certs podName:6f5e7353-e554-4c8d-a4a5-83a624c5349d nodeName:}" failed. No retries permitted until 2024-02-12 20:25:35.150742187 +0000 UTC m=+127.859422760 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/6f5e7353-e554-4c8d-a4a5-83a624c5349d-calico-apiserver-certs") pod "calico-apiserver-7d5f545cdf-6f4m7" (UID: "6f5e7353-e554-4c8d-a4a5-83a624c5349d") : secret "calico-apiserver-certs" not found Feb 12 20:25:34.831000 audit[5919]: NETFILTER_CFG table=filter:131 family=2 entries=31 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.840588 kernel: audit: type=1325 audit(1707769534.831:463): table=filter:131 family=2 entries=31 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.831000 audit[5919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 a1=ffffc73a4540 a2=0 a3=ffff9458a6c0 items=0 ppid=3263 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.831000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.861383 kernel: audit: type=1300 audit(1707769534.831:463): arch=c00000b7 syscall=211 success=yes exit=11068 a0=3 
a1=ffffc73a4540 a2=0 a3=ffff9458a6c0 items=0 ppid=3263 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.861511 kernel: audit: type=1327 audit(1707769534.831:463): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:34.861576 kernel: audit: type=1325 audit(1707769534.855:464): table=nat:132 family=2 entries=78 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.855000 audit[5919]: NETFILTER_CFG table=nat:132 family=2 entries=78 op=nft_register_rule pid=5919 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:34.855000 audit[5919]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24988 a0=3 a1=ffffc73a4540 a2=0 a3=ffff9458a6c0 items=0 ppid=3263 pid=5919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:34.855000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:35.304876 env[1824]: time="2024-02-12T20:25:35.304791925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5f545cdf-6f4m7,Uid:6f5e7353-e554-4c8d-a4a5-83a624c5349d,Namespace:calico-apiserver,Attempt:0,}" Feb 12 20:25:35.526960 systemd-networkd[1611]: cali46e39a0b44c: Link UP Feb 12 20:25:35.532908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 12 20:25:35.533096 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali46e39a0b44c: link becomes ready Feb 12 20:25:35.533704 systemd-networkd[1611]: cali46e39a0b44c: Gained carrier Feb 12 20:25:35.537352 (udev-worker)[5958]: Network interface NamePolicy= 
disabled on kernel command line. Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.381 [INFO][5940] plugin.go 327: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0 calico-apiserver-7d5f545cdf- calico-apiserver 6f5e7353-e554-4c8d-a4a5-83a624c5349d 1100 0 2024-02-12 20:25:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7d5f545cdf projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-16-195 calico-apiserver-7d5f545cdf-6f4m7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali46e39a0b44c [] []}} ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.382 [INFO][5940] k8s.go 76: Extracted identifiers for CmdAddK8s ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.430 [INFO][5952] ipam_plugin.go 228: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" HandleID="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Workload="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.449 [INFO][5952] ipam_plugin.go 268: Auto assigning IP ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" 
HandleID="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Workload="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400025eb40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-16-195", "pod":"calico-apiserver-7d5f545cdf-6f4m7", "timestamp":"2024-02-12 20:25:35.430226989 +0000 UTC"}, Hostname:"ip-172-31-16-195", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.449 [INFO][5952] ipam_plugin.go 356: About to acquire host-wide IPAM lock. Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.450 [INFO][5952] ipam_plugin.go 371: Acquired host-wide IPAM lock. Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.450 [INFO][5952] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-16-195' Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.452 [INFO][5952] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.461 [INFO][5952] ipam.go 372: Looking up existing affinities for host host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.468 [INFO][5952] ipam.go 489: Trying affinity for 192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.472 [INFO][5952] ipam.go 155: Attempting to load block cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.476 [INFO][5952] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.126.0/26 host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.476 [INFO][5952] ipam.go 
1180: Attempting to assign 1 addresses from block block=192.168.126.0/26 handle="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.480 [INFO][5952] ipam.go 1682: Creating new handle: k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.498 [INFO][5952] ipam.go 1203: Writing block in order to claim IPs block=192.168.126.0/26 handle="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.516 [INFO][5952] ipam.go 1216: Successfully claimed IPs: [192.168.126.5/26] block=192.168.126.0/26 handle="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.517 [INFO][5952] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.126.5/26] handle="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" host="ip-172-31-16-195" Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.517 [INFO][5952] ipam_plugin.go 377: Released host-wide IPAM lock. 
Feb 12 20:25:35.564457 env[1824]: 2024-02-12 20:25:35.517 [INFO][5952] ipam_plugin.go 286: Calico CNI IPAM assigned addresses IPv4=[192.168.126.5/26] IPv6=[] ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" HandleID="k8s-pod-network.68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Workload="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.520 [INFO][5940] k8s.go 385: Populated endpoint ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0", GenerateName:"calico-apiserver-7d5f545cdf-", Namespace:"calico-apiserver", SelfLink:"", UID:"6f5e7353-e554-4c8d-a4a5-83a624c5349d", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5f545cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"", Pod:"calico-apiserver-7d5f545cdf-6f4m7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46e39a0b44c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.520 [INFO][5940] k8s.go 386: Calico CNI using IPs: [192.168.126.5/32] ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.520 [INFO][5940] dataplane_linux.go 68: Setting the host side veth name to cali46e39a0b44c ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.533 [INFO][5940] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.535 [INFO][5940] k8s.go 413: Added Mac, interface name, and active container ID to endpoint ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0", GenerateName:"calico-apiserver-7d5f545cdf-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"6f5e7353-e554-4c8d-a4a5-83a624c5349d", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2024, time.February, 12, 20, 25, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7d5f545cdf", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-16-195", ContainerID:"68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e", Pod:"calico-apiserver-7d5f545cdf-6f4m7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.126.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali46e39a0b44c", MAC:"9a:21:1e:e4:26:0a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 12 20:25:35.565962 env[1824]: 2024-02-12 20:25:35.552 [INFO][5940] k8s.go 491: Wrote updated endpoint to datastore ContainerID="68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e" Namespace="calico-apiserver" Pod="calico-apiserver-7d5f545cdf-6f4m7" WorkloadEndpoint="ip--172--31--16--195-k8s-calico--apiserver--7d5f545cdf--6f4m7-eth0" Feb 12 20:25:35.632081 env[1824]: time="2024-02-12T20:25:35.631960862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 12 20:25:35.632388 env[1824]: time="2024-02-12T20:25:35.632317475Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 12 20:25:35.632632 env[1824]: time="2024-02-12T20:25:35.632525745Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 12 20:25:35.635502 env[1824]: time="2024-02-12T20:25:35.635323342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e pid=5980 runtime=io.containerd.runc.v2 Feb 12 20:25:35.811398 systemd[1]: run-containerd-runc-k8s.io-68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e-runc.MkHbnD.mount: Deactivated successfully. Feb 12 20:25:35.841000 audit[6004]: NETFILTER_CFG table=filter:133 family=2 entries=55 op=nft_register_chain pid=6004 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Feb 12 20:25:35.841000 audit[6004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28088 a0=3 a1=ffffc15695a0 a2=0 a3=ffff8669dfa8 items=0 ppid=4673 pid=6004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:35.841000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Feb 12 20:25:35.910085 env[1824]: time="2024-02-12T20:25:35.910022512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7d5f545cdf-6f4m7,Uid:6f5e7353-e554-4c8d-a4a5-83a624c5349d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e\"" Feb 12 20:25:35.913976 env[1824]: time="2024-02-12T20:25:35.913898047Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\"" Feb 12 20:25:36.112000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.195:22-147.75.109.163:32802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.113988 systemd[1]: Started sshd@23-172.31.16.195:22-147.75.109.163:32802.service. Feb 12 20:25:36.286000 audit[6018]: USER_ACCT pid=6018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.288888 sshd[6018]: Accepted publickey for core from 147.75.109.163 port 32802 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:36.289000 audit[6018]: CRED_ACQ pid=6018 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.289000 audit[6018]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd8b55da0 a2=3 a3=1 items=0 ppid=1 pid=6018 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:36.289000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:36.292033 sshd[6018]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:36.301840 systemd[1]: Started session-24.scope. Feb 12 20:25:36.303120 systemd-logind[1800]: New session 24 of user core. 
Feb 12 20:25:36.319000 audit[6018]: USER_START pid=6018 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.321000 audit[6021]: CRED_ACQ pid=6021 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.561704 sshd[6018]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:36.563000 audit[6018]: USER_END pid=6018 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.563000 audit[6018]: CRED_DISP pid=6018 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:36.568615 systemd[1]: sshd@23-172.31.16.195:22-147.75.109.163:32802.service: Deactivated successfully. Feb 12 20:25:36.567000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.16.195:22-147.75.109.163:32802 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:36.569900 systemd-logind[1800]: Session 24 logged out. Waiting for processes to exit. Feb 12 20:25:36.571757 systemd[1]: session-24.scope: Deactivated successfully. Feb 12 20:25:36.573937 systemd-logind[1800]: Removed session 24. 
Feb 12 20:25:36.701475 systemd-networkd[1611]: cali46e39a0b44c: Gained IPv6LL Feb 12 20:25:37.279916 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1220018944.mount: Deactivated successfully. Feb 12 20:25:38.790000 audit[6056]: NETFILTER_CFG table=filter:134 family=2 entries=20 op=nft_register_rule pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:38.790000 audit[6056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffefeed0c0 a2=0 a3=ffffbd03a6c0 items=0 ppid=3263 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.790000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:38.803000 audit[6056]: NETFILTER_CFG table=nat:135 family=2 entries=162 op=nft_register_chain pid=6056 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:38.803000 audit[6056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffefeed0c0 a2=0 a3=ffffbd03a6c0 items=0 ppid=3263 pid=6056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:38.803000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:39.233149 env[1824]: time="2024-02-12T20:25:39.233093652Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:39.237039 env[1824]: time="2024-02-12T20:25:39.236975188Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:39.240221 env[1824]: time="2024-02-12T20:25:39.240158283Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.27.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:39.242700 env[1824]: time="2024-02-12T20:25:39.242637642Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:5ff0bdc8d0b2e9d7819703b18867f60f9153ed01da81e2bbfa22002abec9dc26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 12 20:25:39.244174 env[1824]: time="2024-02-12T20:25:39.244101426Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.27.0\" returns image reference \"sha256:24494ef6c7de0e2dcf21ad9fb6c94801c53f120443e256a5e1b54eccd57058a9\"" Feb 12 20:25:39.249461 env[1824]: time="2024-02-12T20:25:39.249400463Z" level=info msg="CreateContainer within sandbox \"68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Feb 12 20:25:39.271821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount446773261.mount: Deactivated successfully. 
Feb 12 20:25:39.279210 env[1824]: time="2024-02-12T20:25:39.279122206Z" level=info msg="CreateContainer within sandbox \"68ef9c86846f6ced109bde6bbdd3af42faac9d84c0196a94e8bc67882948364e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ac9560aff1d2671b8dce1ef436f18f73b8cdc22e1ba33a05dc3c8c6ed39e1648\"" Feb 12 20:25:39.280275 env[1824]: time="2024-02-12T20:25:39.280224157Z" level=info msg="StartContainer for \"ac9560aff1d2671b8dce1ef436f18f73b8cdc22e1ba33a05dc3c8c6ed39e1648\"" Feb 12 20:25:39.440610 env[1824]: time="2024-02-12T20:25:39.439139476Z" level=info msg="StartContainer for \"ac9560aff1d2671b8dce1ef436f18f73b8cdc22e1ba33a05dc3c8c6ed39e1648\" returns successfully" Feb 12 20:25:39.759044 kernel: kauditd_printk_skb: 22 callbacks suppressed Feb 12 20:25:39.759216 kernel: audit: type=1325 audit(1707769539.749:477): table=filter:136 family=2 entries=8 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.749000 audit[6122]: NETFILTER_CFG table=filter:136 family=2 entries=8 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.749000 audit[6122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffff6bace0 a2=0 a3=ffff95f766c0 items=0 ppid=3263 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.774381 kernel: audit: type=1300 audit(1707769539.749:477): arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffff6bace0 a2=0 a3=ffff95f766c0 items=0 ppid=3263 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.749000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:39.780228 kernel: audit: type=1327 audit(1707769539.749:477): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:39.780312 kernel: audit: type=1325 audit(1707769539.766:478): table=nat:137 family=2 entries=198 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.766000 audit[6122]: NETFILTER_CFG table=nat:137 family=2 entries=198 op=nft_register_rule pid=6122 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:39.766000 audit[6122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffff6bace0 a2=0 a3=ffff95f766c0 items=0 ppid=3263 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.797849 kernel: audit: type=1300 audit(1707769539.766:478): arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffff6bace0 a2=0 a3=ffff95f766c0 items=0 ppid=3263 pid=6122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:39.766000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:39.803572 kernel: audit: type=1327 audit(1707769539.766:478): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:41.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.195:22-147.75.109.163:32808 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:41.589581 systemd[1]: Started sshd@24-172.31.16.195:22-147.75.109.163:32808.service. Feb 12 20:25:41.599595 kernel: audit: type=1130 audit(1707769541.588:479): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.195:22-147.75.109.163:32808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:41.768000 audit[6131]: USER_ACCT pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:41.770452 sshd[6131]: Accepted publickey for core from 147.75.109.163 port 32808 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:41.781679 kernel: audit: type=1101 audit(1707769541.768:480): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:41.780000 audit[6131]: CRED_ACQ pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:41.783281 sshd[6131]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:41.798422 kernel: audit: type=1103 audit(1707769541.780:481): pid=6131 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 
20:25:41.798593 kernel: audit: type=1006 audit(1707769541.780:482): pid=6131 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Feb 12 20:25:41.780000 audit[6131]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffea167c30 a2=3 a3=1 items=0 ppid=1 pid=6131 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:41.780000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:41.807012 systemd[1]: Started session-25.scope. Feb 12 20:25:41.807443 systemd-logind[1800]: New session 25 of user core. Feb 12 20:25:41.817000 audit[6131]: USER_START pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:41.820000 audit[6134]: CRED_ACQ pid=6134 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:42.092829 sshd[6131]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:42.093000 audit[6131]: USER_END pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:42.094000 audit[6131]: CRED_DISP pid=6131 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 
addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:42.099488 systemd[1]: sshd@24-172.31.16.195:22-147.75.109.163:32808.service: Deactivated successfully. Feb 12 20:25:42.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.16.195:22-147.75.109.163:32808 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:42.101926 systemd[1]: session-25.scope: Deactivated successfully. Feb 12 20:25:42.102880 systemd-logind[1800]: Session 25 logged out. Waiting for processes to exit. Feb 12 20:25:42.106010 systemd-logind[1800]: Removed session 25. Feb 12 20:25:42.404000 audit[6169]: NETFILTER_CFG table=filter:138 family=2 entries=8 op=nft_register_rule pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.404000 audit[6169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2620 a0=3 a1=ffffee404e00 a2=0 a3=ffffaa6cd6c0 items=0 ppid=3263 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.404000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:42.413000 audit[6169]: NETFILTER_CFG table=nat:139 family=2 entries=198 op=nft_register_rule pid=6169 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:25:42.413000 audit[6169]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=66940 a0=3 a1=ffffee404e00 a2=0 a3=ffffaa6cd6c0 items=0 ppid=3263 pid=6169 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:42.413000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:25:47.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.195:22-147.75.109.163:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:47.117709 systemd[1]: Started sshd@25-172.31.16.195:22-147.75.109.163:53680.service. Feb 12 20:25:47.121041 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 12 20:25:47.121197 kernel: audit: type=1130 audit(1707769547.116:490): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.195:22-147.75.109.163:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:47.299000 audit[6198]: USER_ACCT pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.309889 sshd[6198]: Accepted publickey for core from 147.75.109.163 port 53680 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:47.311661 kernel: audit: type=1101 audit(1707769547.299:491): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.311000 audit[6198]: CRED_ACQ pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.313887 sshd[6198]: pam_unix(sshd:session): 
session opened for user core(uid=500) by (uid=0) Feb 12 20:25:47.328858 kernel: audit: type=1103 audit(1707769547.311:492): pid=6198 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.328985 kernel: audit: type=1006 audit(1707769547.311:493): pid=6198 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Feb 12 20:25:47.329039 kernel: audit: type=1300 audit(1707769547.311:493): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb699d80 a2=3 a3=1 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:47.311000 audit[6198]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcb699d80 a2=3 a3=1 items=0 ppid=1 pid=6198 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:47.311000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:47.342938 kernel: audit: type=1327 audit(1707769547.311:493): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:47.348646 systemd-logind[1800]: New session 26 of user core. Feb 12 20:25:47.350709 systemd[1]: Started session-26.scope. 
Feb 12 20:25:47.360000 audit[6198]: USER_START pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.363000 audit[6201]: CRED_ACQ pid=6201 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.385589 kernel: audit: type=1105 audit(1707769547.360:494): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.385772 kernel: audit: type=1103 audit(1707769547.363:495): pid=6201 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.597065 sshd[6198]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:47.597000 audit[6198]: USER_END pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.603592 systemd[1]: sshd@25-172.31.16.195:22-147.75.109.163:53680.service: Deactivated successfully. Feb 12 20:25:47.605111 systemd[1]: session-26.scope: Deactivated successfully. 
Feb 12 20:25:47.599000 audit[6198]: CRED_DISP pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.612679 systemd-logind[1800]: Session 26 logged out. Waiting for processes to exit. Feb 12 20:25:47.614640 systemd-logind[1800]: Removed session 26. Feb 12 20:25:47.620289 kernel: audit: type=1106 audit(1707769547.597:496): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.620505 kernel: audit: type=1104 audit(1707769547.599:497): pid=6198 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:47.602000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.16.195:22-147.75.109.163:53680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:52.622952 systemd[1]: Started sshd@26-172.31.16.195:22-147.75.109.163:53692.service. Feb 12 20:25:52.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.195:22-147.75.109.163:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:52.629570 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:52.629714 kernel: audit: type=1130 audit(1707769552.622:499): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.195:22-147.75.109.163:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:52.790000 audit[6211]: USER_ACCT pid=6211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.791922 sshd[6211]: Accepted publickey for core from 147.75.109.163 port 53692 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:52.795030 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:52.792000 audit[6211]: CRED_ACQ pid=6211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.811946 kernel: audit: type=1101 audit(1707769552.790:500): pid=6211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.812120 kernel: audit: type=1103 audit(1707769552.792:501): pid=6211 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.818314 kernel: audit: type=1006 audit(1707769552.792:502): pid=6211 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Feb 12 20:25:52.792000 audit[6211]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffea978c0 a2=3 a3=1 items=0 ppid=1 pid=6211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:52.828631 kernel: audit: type=1300 audit(1707769552.792:502): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffea978c0 a2=3 a3=1 items=0 ppid=1 pid=6211 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:52.792000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:52.836040 kernel: audit: type=1327 audit(1707769552.792:502): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:52.838155 systemd[1]: Started session-27.scope. Feb 12 20:25:52.838732 systemd-logind[1800]: New session 27 of user core. 
Feb 12 20:25:52.857000 audit[6211]: USER_START pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.862000 audit[6214]: CRED_ACQ pid=6214 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.881980 kernel: audit: type=1105 audit(1707769552.857:503): pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:52.882169 kernel: audit: type=1103 audit(1707769552.862:504): pid=6214 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:53.174871 sshd[6211]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:53.175000 audit[6211]: USER_END pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:53.179985 systemd[1]: sshd@26-172.31.16.195:22-147.75.109.163:53692.service: Deactivated successfully. Feb 12 20:25:53.181500 systemd[1]: session-27.scope: Deactivated successfully. Feb 12 20:25:53.190084 systemd-logind[1800]: Session 27 logged out. 
Waiting for processes to exit. Feb 12 20:25:53.192021 systemd-logind[1800]: Removed session 27. Feb 12 20:25:53.175000 audit[6211]: CRED_DISP pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:53.203563 kernel: audit: type=1106 audit(1707769553.175:505): pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:53.203743 kernel: audit: type=1104 audit(1707769553.175:506): pid=6211 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:53.178000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.16.195:22-147.75.109.163:53692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:58.202178 systemd[1]: Started sshd@27-172.31.16.195:22-147.75.109.163:58712.service. Feb 12 20:25:58.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.195:22-147.75.109.163:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:25:58.207593 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:25:58.207712 kernel: audit: type=1130 audit(1707769558.202:508): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.195:22-147.75.109.163:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:58.380000 audit[6228]: USER_ACCT pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.384468 sshd[6228]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:25:58.385368 sshd[6228]: Accepted publickey for core from 147.75.109.163 port 58712 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:25:58.381000 audit[6228]: CRED_ACQ pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.400869 kernel: audit: type=1101 audit(1707769558.380:509): pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.400987 kernel: audit: type=1103 audit(1707769558.381:510): pid=6228 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.408331 kernel: audit: type=1006 audit(1707769558.381:511): pid=6228 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Feb 12 20:25:58.408463 kernel: audit: type=1300 audit(1707769558.381:511): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaf1d0c0 a2=3 a3=1 items=0 ppid=1 pid=6228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:58.381000 audit[6228]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaf1d0c0 a2=3 a3=1 items=0 ppid=1 pid=6228 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:25:58.424935 kernel: audit: type=1327 audit(1707769558.381:511): proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:58.381000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:25:58.424560 systemd[1]: Started session-28.scope. Feb 12 20:25:58.428480 systemd-logind[1800]: New session 28 of user core. 
Feb 12 20:25:58.455000 audit[6228]: USER_START pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.460000 audit[6232]: CRED_ACQ pid=6232 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.476679 kernel: audit: type=1105 audit(1707769558.455:512): pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.476847 kernel: audit: type=1103 audit(1707769558.460:513): pid=6232 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.702316 sshd[6228]: pam_unix(sshd:session): session closed for user core Feb 12 20:25:58.704000 audit[6228]: USER_END pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.708247 systemd[1]: sshd@27-172.31.16.195:22-147.75.109.163:58712.service: Deactivated successfully. Feb 12 20:25:58.709958 systemd[1]: session-28.scope: Deactivated successfully. 
Feb 12 20:25:58.705000 audit[6228]: CRED_DISP pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.717320 systemd-logind[1800]: Session 28 logged out. Waiting for processes to exit. Feb 12 20:25:58.726378 kernel: audit: type=1106 audit(1707769558.704:514): pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.726590 kernel: audit: type=1104 audit(1707769558.705:515): pid=6228 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:25:58.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.16.195:22-147.75.109.163:58712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:25:58.728498 systemd-logind[1800]: Removed session 28. Feb 12 20:26:03.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.16.195:22-147.75.109.163:58718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:03.727192 systemd[1]: Started sshd@28-172.31.16.195:22-147.75.109.163:58718.service. 
Feb 12 20:26:03.730482 kernel: kauditd_printk_skb: 1 callbacks suppressed Feb 12 20:26:03.730535 kernel: audit: type=1130 audit(1707769563.726:517): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.16.195:22-147.75.109.163:58718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:03.902000 audit[6252]: USER_ACCT pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.908163 sshd[6252]: Accepted publickey for core from 147.75.109.163 port 58718 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:03.911420 sshd[6252]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:03.908000 audit[6252]: CRED_ACQ pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.923141 kernel: audit: type=1101 audit(1707769563.902:518): pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.923243 kernel: audit: type=1103 audit(1707769563.908:519): pid=6252 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.923312 kernel: audit: type=1006 audit(1707769563.908:520): pid=6252 uid=0 
subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Feb 12 20:26:03.922161 systemd[1]: Started session-29.scope. Feb 12 20:26:03.924623 systemd-logind[1800]: New session 29 of user core. Feb 12 20:26:03.929807 kernel: audit: type=1300 audit(1707769563.908:520): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4725e80 a2=3 a3=1 items=0 ppid=1 pid=6252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.908000 audit[6252]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4725e80 a2=3 a3=1 items=0 ppid=1 pid=6252 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:03.944248 kernel: audit: type=1327 audit(1707769563.908:520): proctitle=737368643A20636F7265205B707269765D Feb 12 20:26:03.908000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:26:03.950000 audit[6252]: USER_START pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.950000 audit[6255]: CRED_ACQ pid=6255 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.971603 kernel: audit: type=1105 audit(1707769563.950:521): pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" 
exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:03.971743 kernel: audit: type=1103 audit(1707769563.950:522): pid=6255 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:04.179042 sshd[6252]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:04.181000 audit[6252]: USER_END pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:04.186126 systemd[1]: sshd@28-172.31.16.195:22-147.75.109.163:58718.service: Deactivated successfully. Feb 12 20:26:04.188000 systemd[1]: session-29.scope: Deactivated successfully. Feb 12 20:26:04.194407 systemd-logind[1800]: Session 29 logged out. Waiting for processes to exit. 
Feb 12 20:26:04.181000 audit[6252]: CRED_DISP pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:04.204437 kernel: audit: type=1106 audit(1707769564.181:523): pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:04.204602 kernel: audit: type=1104 audit(1707769564.181:524): pid=6252 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:04.205497 systemd-logind[1800]: Removed session 29. Feb 12 20:26:04.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.16.195:22-147.75.109.163:58718 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:05.354715 systemd[1]: run-containerd-runc-k8s.io-ac9560aff1d2671b8dce1ef436f18f73b8cdc22e1ba33a05dc3c8c6ed39e1648-runc.ql0UvX.mount: Deactivated successfully. 
Feb 12 20:26:05.480439 kubelet[3105]: I0212 20:26:05.480381 3105 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7d5f545cdf-6f4m7" podStartSLOduration=-9.223372005374483e+09 pod.CreationTimestamp="2024-02-12 20:25:34 +0000 UTC" firstStartedPulling="2024-02-12 20:25:35.912674598 +0000 UTC m=+128.621355171" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-12 20:25:39.627583687 +0000 UTC m=+132.336264260" watchObservedRunningTime="2024-02-12 20:26:05.480293629 +0000 UTC m=+158.188974202" Feb 12 20:26:05.569000 audit[6330]: NETFILTER_CFG table=filter:140 family=2 entries=7 op=nft_register_rule pid=6330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.569000 audit[6330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd38b5190 a2=0 a3=ffffb357b6c0 items=0 ppid=3263 pid=6330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.569000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.578000 audit[6330]: NETFILTER_CFG table=nat:141 family=2 entries=205 op=nft_register_chain pid=6330 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.578000 audit[6330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=70436 a0=3 a1=ffffd38b5190 a2=0 a3=ffffb357b6c0 items=0 ppid=3263 pid=6330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.661000 audit[6356]: 
NETFILTER_CFG table=filter:142 family=2 entries=6 op=nft_register_rule pid=6356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.661000 audit[6356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe68875c0 a2=0 a3=ffffb61ec6c0 items=0 ppid=3263 pid=6356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.661000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:05.670000 audit[6356]: NETFILTER_CFG table=nat:143 family=2 entries=212 op=nft_register_chain pid=6356 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Feb 12 20:26:05.670000 audit[6356]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=72324 a0=3 a1=ffffe68875c0 a2=0 a3=ffffb61ec6c0 items=0 ppid=3263 pid=6356 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:05.670000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Feb 12 20:26:09.206786 systemd[1]: Started sshd@29-172.31.16.195:22-147.75.109.163:55942.service. Feb 12 20:26:09.218856 kernel: kauditd_printk_skb: 13 callbacks suppressed Feb 12 20:26:09.219007 kernel: audit: type=1130 audit(1707769569.207:530): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-172.31.16.195:22-147.75.109.163:55942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 12 20:26:09.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-172.31.16.195:22-147.75.109.163:55942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:09.389000 audit[6357]: USER_ACCT pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.400987 sshd[6357]: Accepted publickey for core from 147.75.109.163 port 55942 ssh2: RSA SHA256:ecUhSIJgyplxxRcBUTSxTp+B0aPr5wgDdA3tvIID0Hc Feb 12 20:26:09.401724 kernel: audit: type=1101 audit(1707769569.389:531): pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.405864 sshd[6357]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 12 20:26:09.404000 audit[6357]: CRED_ACQ pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.423967 kernel: audit: type=1103 audit(1707769569.404:532): pid=6357 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.424211 kernel: audit: type=1006 audit(1707769569.404:533): pid=6357 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=30 res=1 Feb 12 
20:26:09.404000 audit[6357]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc057bbd0 a2=3 a3=1 items=0 ppid=1 pid=6357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:09.431750 systemd[1]: Started session-30.scope. Feb 12 20:26:09.434114 systemd-logind[1800]: New session 30 of user core. Feb 12 20:26:09.437982 kernel: audit: type=1300 audit(1707769569.404:533): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc057bbd0 a2=3 a3=1 items=0 ppid=1 pid=6357 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=30 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 12 20:26:09.404000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Feb 12 20:26:09.445406 kernel: audit: type=1327 audit(1707769569.404:533): proctitle=737368643A20636F7265205B707269765D Feb 12 20:26:09.456000 audit[6357]: USER_START pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.457000 audit[6360]: CRED_ACQ pid=6360 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.480085 kernel: audit: type=1105 audit(1707769569.456:534): pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.480269 kernel: audit: 
type=1103 audit(1707769569.457:535): pid=6360 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.717319 sshd[6357]: pam_unix(sshd:session): session closed for user core Feb 12 20:26:09.719000 audit[6357]: USER_END pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.724864 systemd[1]: sshd@29-172.31.16.195:22-147.75.109.163:55942.service: Deactivated successfully. Feb 12 20:26:09.726318 systemd[1]: session-30.scope: Deactivated successfully. Feb 12 20:26:09.735020 systemd-logind[1800]: Session 30 logged out. Waiting for processes to exit. Feb 12 20:26:09.719000 audit[6357]: CRED_DISP pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.746425 kernel: audit: type=1106 audit(1707769569.719:536): pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.747524 kernel: audit: type=1104 audit(1707769569.719:537): pid=6357 uid=0 auid=500 ses=30 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Feb 12 20:26:09.724000 audit[1]: SERVICE_STOP pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@29-172.31.16.195:22-147.75.109.163:55942 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 12 20:26:09.748938 systemd-logind[1800]: Removed session 30. Feb 12 20:26:16.509902 systemd[1]: run-containerd-runc-k8s.io-af901d6703a0a98cfe9ca7d379bb6b0db9c836db509fea0f0b516342eee99ff1-runc.9p0DlO.mount: Deactivated successfully. Feb 12 20:26:22.992144 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c-rootfs.mount: Deactivated successfully. Feb 12 20:26:22.994518 env[1824]: time="2024-02-12T20:26:22.994408463Z" level=info msg="shim disconnected" id=22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c Feb 12 20:26:22.995138 env[1824]: time="2024-02-12T20:26:22.994599581Z" level=warning msg="cleaning up after shim disconnected" id=22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c namespace=k8s.io Feb 12 20:26:22.995138 env[1824]: time="2024-02-12T20:26:22.994626281Z" level=info msg="cleaning up dead shim" Feb 12 20:26:23.009436 env[1824]: time="2024-02-12T20:26:23.009363260Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6408 runtime=io.containerd.runc.v2\n" Feb 12 20:26:23.305848 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2-rootfs.mount: Deactivated successfully. 
Feb 12 20:26:23.307926 env[1824]: time="2024-02-12T20:26:23.307619616Z" level=info msg="shim disconnected" id=a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2 Feb 12 20:26:23.307926 env[1824]: time="2024-02-12T20:26:23.307700091Z" level=warning msg="cleaning up after shim disconnected" id=a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2 namespace=k8s.io Feb 12 20:26:23.307926 env[1824]: time="2024-02-12T20:26:23.307728820Z" level=info msg="cleaning up dead shim" Feb 12 20:26:23.322520 env[1824]: time="2024-02-12T20:26:23.322444770Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6436 runtime=io.containerd.runc.v2\n" Feb 12 20:26:23.732169 kubelet[3105]: I0212 20:26:23.732137 3105 scope.go:115] "RemoveContainer" containerID="a0f4bd3c49044ee4d70c0c9f510f75edabcd08a81d5e1230239308a7ac14d3e2" Feb 12 20:26:23.736811 env[1824]: time="2024-02-12T20:26:23.736711581Z" level=info msg="CreateContainer within sandbox \"751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Feb 12 20:26:23.737330 kubelet[3105]: I0212 20:26:23.737299 3105 scope.go:115] "RemoveContainer" containerID="22d7566e29110cd8177ceb162e778efdd5171c2798eee4aed57f1eb22c16f88c" Feb 12 20:26:23.742506 env[1824]: time="2024-02-12T20:26:23.742452738Z" level=info msg="CreateContainer within sandbox \"c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 12 20:26:23.762838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount919884188.mount: Deactivated successfully. 
Feb 12 20:26:23.777722 env[1824]: time="2024-02-12T20:26:23.777516654Z" level=info msg="CreateContainer within sandbox \"751a4b8ac067d6cb50053a0a6ce7b131cab6b7f71d3bdad9fc0ba43506c30a66\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"a73f437b592e832093c1fa185b1d3e76b2140cddf362cb3631727c48650619b6\"" Feb 12 20:26:23.778895 env[1824]: time="2024-02-12T20:26:23.778821691Z" level=info msg="StartContainer for \"a73f437b592e832093c1fa185b1d3e76b2140cddf362cb3631727c48650619b6\"" Feb 12 20:26:23.785092 env[1824]: time="2024-02-12T20:26:23.784724985Z" level=info msg="CreateContainer within sandbox \"c796e58187421bb2180b332e3dbc4f71a9041a756b652b9273ee387a507b6c08\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"7e9c275689a4b478daf3363e825999c55330acc1302e9e650f3c9818b0d3ff15\"" Feb 12 20:26:23.785751 env[1824]: time="2024-02-12T20:26:23.785709733Z" level=info msg="StartContainer for \"7e9c275689a4b478daf3363e825999c55330acc1302e9e650f3c9818b0d3ff15\"" Feb 12 20:26:23.919161 env[1824]: time="2024-02-12T20:26:23.919053246Z" level=info msg="StartContainer for \"a73f437b592e832093c1fa185b1d3e76b2140cddf362cb3631727c48650619b6\" returns successfully" Feb 12 20:26:23.948414 env[1824]: time="2024-02-12T20:26:23.948331831Z" level=info msg="StartContainer for \"7e9c275689a4b478daf3363e825999c55330acc1302e9e650f3c9818b0d3ff15\" returns successfully" Feb 12 20:26:28.548482 systemd[1]: run-containerd-runc-k8s.io-f75fe05dfa83a08f915c792e63d3e189bf2ff83990a680a9c14bf480a17197d9-runc.gIX7z7.mount: Deactivated successfully. Feb 12 20:26:29.770514 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b-rootfs.mount: Deactivated successfully. 
Feb 12 20:26:29.772884 env[1824]: time="2024-02-12T20:26:29.772804647Z" level=info msg="shim disconnected" id=b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b Feb 12 20:26:29.773642 env[1824]: time="2024-02-12T20:26:29.773533894Z" level=warning msg="cleaning up after shim disconnected" id=b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b namespace=k8s.io Feb 12 20:26:29.773786 env[1824]: time="2024-02-12T20:26:29.773758168Z" level=info msg="cleaning up dead shim" Feb 12 20:26:29.788026 env[1824]: time="2024-02-12T20:26:29.787972947Z" level=warning msg="cleanup warnings time=\"2024-02-12T20:26:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=6557 runtime=io.containerd.runc.v2\n" Feb 12 20:26:30.623030 kubelet[3105]: E0212 20:26:30.622717 3105 controller.go:189] failed to update lease, error: Put "https://172.31.16.195:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-16-195?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 12 20:26:30.764747 kubelet[3105]: I0212 20:26:30.764701 3105 scope.go:115] "RemoveContainer" containerID="b243f487fd3ecc2db60dc64698be67881952f61dfe615bb47ee6ef9eac33188b" Feb 12 20:26:30.768222 env[1824]: time="2024-02-12T20:26:30.768151485Z" level=info msg="CreateContainer within sandbox \"f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 12 20:26:30.793590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount22112793.mount: Deactivated successfully. 
Feb 12 20:26:30.808362 env[1824]: time="2024-02-12T20:26:30.808299967Z" level=info msg="CreateContainer within sandbox \"f3217b85b9a9738b94ad0822ca26bf28004c5d12b617550f2a1cad13ef06f78b\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"3ccd0dd0ac6039a440f72cfba198e7ad8c895b128bfeea597f79ea41c670c6ca\"" Feb 12 20:26:30.809766 env[1824]: time="2024-02-12T20:26:30.809723263Z" level=info msg="StartContainer for \"3ccd0dd0ac6039a440f72cfba198e7ad8c895b128bfeea597f79ea41c670c6ca\"" Feb 12 20:26:30.938105 env[1824]: time="2024-02-12T20:26:30.937983599Z" level=info msg="StartContainer for \"3ccd0dd0ac6039a440f72cfba198e7ad8c895b128bfeea597f79ea41c670c6ca\" returns successfully"