Jun 25 14:16:34.105332 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083] Jun 25 14:16:34.105370 kernel: Linux version 6.1.95-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT Tue Jun 25 13:19:44 -00 2024 Jun 25 14:16:34.105393 kernel: efi: EFI v2.70 by EDK II Jun 25 14:16:34.105408 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7ac1aa98 MEMRESERVE=0x78553e18 Jun 25 14:16:34.105422 kernel: ACPI: Early table checksum verification disabled Jun 25 14:16:34.105435 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON) Jun 25 14:16:34.105452 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013) Jun 25 14:16:34.105466 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001) Jun 25 14:16:34.105480 kernel: ACPI: DSDT 0x0000000078640000 00159D (v02 AMAZON AMZNDSDT 00000001 INTL 20160527) Jun 25 14:16:34.105493 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001) Jun 25 14:16:34.105511 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001) Jun 25 14:16:34.105524 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001) Jun 25 14:16:34.105538 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001) Jun 25 14:16:34.105552 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001) Jun 25 14:16:34.105568 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001) Jun 25 14:16:34.105587 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001) Jun 25 14:16:34.105602 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200 Jun 25 14:16:34.106763 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200') Jun 25 14:16:34.106784 kernel: printk: bootconsole [uart0] enabled Jun 25 14:16:34.106799 kernel: NUMA: Failed to initialise from firmware Jun 25 14:16:34.106814 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 14:16:34.107232 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff] Jun 25 14:16:34.107248 kernel: Zone ranges: Jun 25 14:16:34.107263 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jun 25 14:16:34.107278 kernel: DMA32 empty Jun 25 14:16:34.107292 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff] Jun 25 14:16:34.107314 kernel: Movable zone start for each node Jun 25 14:16:34.107329 kernel: Early memory node ranges Jun 25 14:16:34.107343 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff] Jun 25 14:16:34.107358 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff] Jun 25 14:16:34.107372 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff] Jun 25 14:16:34.107386 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff] Jun 25 14:16:34.107400 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff] Jun 25 14:16:34.107414 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff] Jun 25 14:16:34.107428 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff] Jun 25 14:16:34.107442 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff] Jun 25 14:16:34.107457 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff] Jun 25 14:16:34.107471 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges Jun 25 
14:16:34.107489 kernel: psci: probing for conduit method from ACPI. Jun 25 14:16:34.107504 kernel: psci: PSCIv1.0 detected in firmware. Jun 25 14:16:34.107526 kernel: psci: Using standard PSCI v0.2 function IDs Jun 25 14:16:34.107542 kernel: psci: Trusted OS migration not required Jun 25 14:16:34.107557 kernel: psci: SMC Calling Convention v1.1 Jun 25 14:16:34.107576 kernel: percpu: Embedded 30 pages/cpu s83880 r8192 d30808 u122880 Jun 25 14:16:34.107592 kernel: pcpu-alloc: s83880 r8192 d30808 u122880 alloc=30*4096 Jun 25 14:16:34.107623 kernel: pcpu-alloc: [0] 0 [0] 1 Jun 25 14:16:34.109695 kernel: Detected PIPT I-cache on CPU0 Jun 25 14:16:34.109717 kernel: CPU features: detected: GIC system register CPU interface Jun 25 14:16:34.109732 kernel: CPU features: detected: Spectre-v2 Jun 25 14:16:34.109748 kernel: CPU features: detected: Spectre-v3a Jun 25 14:16:34.109764 kernel: CPU features: detected: Spectre-BHB Jun 25 14:16:34.109779 kernel: CPU features: kernel page table isolation forced ON by KASLR Jun 25 14:16:34.109795 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jun 25 14:16:34.109810 kernel: CPU features: detected: ARM erratum 1742098 Jun 25 14:16:34.109825 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923 Jun 25 14:16:34.109850 kernel: alternatives: applying boot alternatives Jun 25 14:16:34.109865 kernel: Fallback order for Node 0: 0 Jun 25 14:16:34.109880 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872 Jun 25 14:16:34.109895 kernel: Policy zone: Normal Jun 25 14:16:34.109914 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:16:34.109932 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jun 25 14:16:34.109947 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jun 25 14:16:34.109963 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jun 25 14:16:34.109978 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jun 25 14:16:34.109994 kernel: software IO TLB: area num 2. Jun 25 14:16:34.110014 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB) Jun 25 14:16:34.110033 kernel: Memory: 3825596K/4030464K available (9984K kernel code, 2108K rwdata, 7720K rodata, 34688K init, 894K bss, 204868K reserved, 0K cma-reserved) Jun 25 14:16:34.110048 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jun 25 14:16:34.110064 kernel: trace event string verifier disabled Jun 25 14:16:34.110079 kernel: rcu: Preemptible hierarchical RCU implementation. Jun 25 14:16:34.110096 kernel: rcu: RCU event tracing is enabled. Jun 25 14:16:34.110113 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jun 25 14:16:34.110129 kernel: Trampoline variant of Tasks RCU enabled. Jun 25 14:16:34.110145 kernel: Tracing variant of Tasks RCU enabled. Jun 25 14:16:34.110160 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Jun 25 14:16:34.110175 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jun 25 14:16:34.110194 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jun 25 14:16:34.110209 kernel: GICv3: 96 SPIs implemented Jun 25 14:16:34.110224 kernel: GICv3: 0 Extended SPIs implemented Jun 25 14:16:34.110239 kernel: Root IRQ handler: gic_handle_irq Jun 25 14:16:34.110254 kernel: GICv3: GICv3 features: 16 PPIs Jun 25 14:16:34.110269 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000 Jun 25 14:16:34.110284 kernel: ITS [mem 0x10080000-0x1009ffff] Jun 25 14:16:34.110299 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000a0000 (indirect, esz 8, psz 64K, shr 1) Jun 25 14:16:34.110314 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000b0000 (flat, esz 8, psz 64K, shr 1) Jun 25 14:16:34.110329 kernel: GICv3: using LPI property table @0x00000004000c0000 Jun 25 14:16:34.110344 kernel: ITS: Using hypervisor restricted LPI range [128] Jun 25 14:16:34.110359 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000d0000 Jun 25 14:16:34.110378 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jun 25 14:16:34.110393 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt). Jun 25 14:16:34.110408 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns Jun 25 14:16:34.110424 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns Jun 25 14:16:34.110439 kernel: Console: colour dummy device 80x25 Jun 25 14:16:34.110454 kernel: printk: console [tty1] enabled Jun 25 14:16:34.110470 kernel: ACPI: Core revision 20220331 Jun 25 14:16:34.110485 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333) Jun 25 14:16:34.110501 kernel: pid_max: default: 32768 minimum: 301 Jun 25 14:16:34.110516 kernel: LSM: Security Framework initializing Jun 25 14:16:34.110536 kernel: SELinux: Initializing. Jun 25 14:16:34.110552 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:16:34.110568 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jun 25 14:16:34.110583 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:16:34.110599 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:16:34.110635 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jun 25 14:16:34.110653 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jun 25 14:16:34.110691 kernel: rcu: Hierarchical SRCU implementation. Jun 25 14:16:34.110709 kernel: rcu: Max phase no-delay instances is 400. Jun 25 14:16:34.110738 kernel: Platform MSI: ITS@0x10080000 domain created Jun 25 14:16:34.110921 kernel: PCI/MSI: ITS@0x10080000 domain created Jun 25 14:16:34.110938 kernel: Remapping and enabling EFI services. Jun 25 14:16:34.110954 kernel: smp: Bringing up secondary CPUs ... Jun 25 14:16:34.110969 kernel: Detected PIPT I-cache on CPU1 Jun 25 14:16:34.110984 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000 Jun 25 14:16:34.111000 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000e0000 Jun 25 14:16:34.111015 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083] Jun 25 14:16:34.111031 kernel: smp: Brought up 1 node, 2 CPUs Jun 25 14:16:34.111052 kernel: SMP: Total of 2 processors activated. 
Jun 25 14:16:34.111068 kernel: CPU features: detected: 32-bit EL0 Support Jun 25 14:16:34.111095 kernel: CPU features: detected: 32-bit EL1 Support Jun 25 14:16:34.111115 kernel: CPU features: detected: CRC32 instructions Jun 25 14:16:34.111131 kernel: CPU: All CPU(s) started at EL1 Jun 25 14:16:34.111147 kernel: alternatives: applying system-wide alternatives Jun 25 14:16:34.111163 kernel: devtmpfs: initialized Jun 25 14:16:34.111179 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jun 25 14:16:34.111200 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jun 25 14:16:34.111216 kernel: pinctrl core: initialized pinctrl subsystem Jun 25 14:16:34.111232 kernel: SMBIOS 3.0.0 present. Jun 25 14:16:34.111249 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018 Jun 25 14:16:34.111265 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jun 25 14:16:34.111281 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jun 25 14:16:34.111298 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jun 25 14:16:34.111314 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jun 25 14:16:34.111330 kernel: audit: initializing netlink subsys (disabled) Jun 25 14:16:34.111350 kernel: audit: type=2000 audit(0.250:1): state=initialized audit_enabled=0 res=1 Jun 25 14:16:34.111367 kernel: thermal_sys: Registered thermal governor 'step_wise' Jun 25 14:16:34.111383 kernel: cpuidle: using governor menu Jun 25 14:16:34.111399 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jun 25 14:16:34.111415 kernel: ASID allocator initialised with 32768 entries Jun 25 14:16:34.111431 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jun 25 14:16:34.111447 kernel: Serial: AMBA PL011 UART driver Jun 25 14:16:34.111463 kernel: KASLR enabled Jun 25 14:16:34.111479 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jun 25 14:16:34.111499 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jun 25 14:16:34.111516 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jun 25 14:16:34.111532 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jun 25 14:16:34.111548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jun 25 14:16:34.111564 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jun 25 14:16:34.111580 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jun 25 14:16:34.111596 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jun 25 14:16:34.111645 kernel: ACPI: Added _OSI(Module Device) Jun 25 14:16:34.111665 kernel: ACPI: Added _OSI(Processor Device) Jun 25 14:16:34.111687 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jun 25 14:16:34.111704 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jun 25 14:16:34.111720 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jun 25 14:16:34.111736 kernel: ACPI: Interpreter enabled Jun 25 14:16:34.111752 kernel: ACPI: Using GIC for interrupt routing Jun 25 14:16:34.111768 kernel: ACPI: MCFG table detected, 1 entries Jun 25 14:16:34.111784 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-0f]) Jun 25 14:16:34.112084 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jun 25 14:16:34.112287 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jun 25 14:16:34.112476 kernel: acpi PNP0A08:00: _OSC: OS now 
controls [PCIeHotplug PME AER PCIeCapability] Jun 25 14:16:34.123570 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x20ffffff] reserved by PNP0C02:00 Jun 25 14:16:34.123851 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x20ffffff] for [bus 00-0f] Jun 25 14:16:34.123893 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window] Jun 25 14:16:34.123915 kernel: acpiphp: Slot [1] registered Jun 25 14:16:34.123933 kernel: acpiphp: Slot [2] registered Jun 25 14:16:34.123950 kernel: acpiphp: Slot [3] registered Jun 25 14:16:34.123975 kernel: acpiphp: Slot [4] registered Jun 25 14:16:34.123992 kernel: acpiphp: Slot [5] registered Jun 25 14:16:34.124008 kernel: acpiphp: Slot [6] registered Jun 25 14:16:34.124024 kernel: acpiphp: Slot [7] registered Jun 25 14:16:34.124039 kernel: acpiphp: Slot [8] registered Jun 25 14:16:34.124056 kernel: acpiphp: Slot [9] registered Jun 25 14:16:34.124072 kernel: acpiphp: Slot [10] registered Jun 25 14:16:34.124088 kernel: acpiphp: Slot [11] registered Jun 25 14:16:34.124104 kernel: acpiphp: Slot [12] registered Jun 25 14:16:34.124119 kernel: acpiphp: Slot [13] registered Jun 25 14:16:34.124140 kernel: acpiphp: Slot [14] registered Jun 25 14:16:34.124156 kernel: acpiphp: Slot [15] registered Jun 25 14:16:34.124172 kernel: acpiphp: Slot [16] registered Jun 25 14:16:34.124187 kernel: acpiphp: Slot [17] registered Jun 25 14:16:34.124203 kernel: acpiphp: Slot [18] registered Jun 25 14:16:34.124219 kernel: acpiphp: Slot [19] registered Jun 25 14:16:34.124235 kernel: acpiphp: Slot [20] registered Jun 25 14:16:34.124251 kernel: acpiphp: Slot [21] registered Jun 25 14:16:34.124267 kernel: acpiphp: Slot [22] registered Jun 25 14:16:34.124287 kernel: acpiphp: Slot [23] registered Jun 25 14:16:34.124304 kernel: acpiphp: Slot [24] registered Jun 25 14:16:34.124320 kernel: acpiphp: Slot [25] registered Jun 25 14:16:34.124335 kernel: acpiphp: Slot [26] registered Jun 25 14:16:34.124351 kernel: acpiphp: Slot [27] registered Jun 25 14:16:34.124367 kernel: acpiphp: Slot [28] registered Jun 25 14:16:34.124383 kernel: acpiphp: Slot [29] registered Jun 25 14:16:34.124399 kernel: acpiphp: Slot [30] registered Jun 25 14:16:34.124415 kernel: acpiphp: Slot [31] registered Jun 25 14:16:34.124431 kernel: PCI host bridge to bus 0000:00 Jun 25 14:16:34.124662 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window] Jun 25 14:16:34.124848 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jun 25 14:16:34.125021 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window] Jun 25 14:16:34.125196 kernel: pci_bus 0000:00: root bus resource [bus 00-0f] Jun 25 14:16:34.125418 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 Jun 25 14:16:34.125680 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 Jun 25 14:16:34.125900 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff] Jun 25 14:16:34.126112 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 Jun 25 14:16:34.126331 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff] Jun 25 14:16:34.126538 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 14:16:34.126802 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 Jun 25 14:16:34.126999 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff] Jun 25 14:16:34.127196 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref] Jun 25 14:16:34.127399 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff] Jun 25 
14:16:34.127646 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold Jun 25 14:16:34.127846 kernel: pci 0000:00:05.0: BAR 2: assigned [mem 0x80000000-0x800fffff pref] Jun 25 14:16:34.128060 kernel: pci 0000:00:05.0: BAR 4: assigned [mem 0x80100000-0x8010ffff] Jun 25 14:16:34.128261 kernel: pci 0000:00:04.0: BAR 0: assigned [mem 0x80110000-0x80113fff] Jun 25 14:16:34.128457 kernel: pci 0000:00:05.0: BAR 0: assigned [mem 0x80114000-0x80117fff] Jun 25 14:16:34.128714 kernel: pci 0000:00:01.0: BAR 0: assigned [mem 0x80118000-0x80118fff] Jun 25 14:16:34.128903 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window] Jun 25 14:16:34.129083 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jun 25 14:16:34.129263 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window] Jun 25 14:16:34.129286 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jun 25 14:16:34.129303 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jun 25 14:16:34.129320 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jun 25 14:16:34.129337 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jun 25 14:16:34.129354 kernel: iommu: Default domain type: Translated Jun 25 14:16:34.129377 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jun 25 14:16:34.129393 kernel: pps_core: LinuxPPS API ver. 1 registered Jun 25 14:16:34.129410 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jun 25 14:16:34.129427 kernel: PTP clock support registered Jun 25 14:16:34.129443 kernel: Registered efivars operations Jun 25 14:16:34.129459 kernel: vgaarb: loaded Jun 25 14:16:34.129475 kernel: clocksource: Switched to clocksource arch_sys_counter Jun 25 14:16:34.129492 kernel: VFS: Disk quotas dquot_6.6.0 Jun 25 14:16:34.129508 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jun 25 14:16:34.129529 kernel: pnp: PnP ACPI init Jun 25 14:16:34.129898 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved Jun 25 14:16:34.129928 kernel: pnp: PnP ACPI: found 1 devices Jun 25 14:16:34.129945 kernel: NET: Registered PF_INET protocol family Jun 25 14:16:34.129962 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jun 25 14:16:34.129979 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jun 25 14:16:34.129995 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jun 25 14:16:34.130011 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jun 25 14:16:34.130034 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jun 25 14:16:34.130050 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jun 25 14:16:34.130067 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:16:34.130083 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jun 25 14:16:34.130099 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jun 25 14:16:34.130115 kernel: PCI: CLS 0 bytes, default 64 Jun 25 14:16:34.130131 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available Jun 25 14:16:34.130147 kernel: kvm [1]: HYP mode not available Jun 25 14:16:34.130163 kernel: Initialise system trusted keyrings Jun 25 14:16:34.130183 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jun 25 14:16:34.130200 kernel: Key type asymmetric registered Jun 25 14:16:34.130216 kernel: Asymmetric 
key parser 'x509' registered Jun 25 14:16:34.130232 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jun 25 14:16:34.130248 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jun 25 14:16:34.130264 kernel: io scheduler mq-deadline registered Jun 25 14:16:34.130280 kernel: io scheduler kyber registered Jun 25 14:16:34.130297 kernel: io scheduler bfq registered Jun 25 14:16:34.130505 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered Jun 25 14:16:34.130535 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jun 25 14:16:34.130552 kernel: ACPI: button: Power Button [PWRB] Jun 25 14:16:34.130568 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1 Jun 25 14:16:34.130584 kernel: ACPI: button: Sleep Button [SLPB] Jun 25 14:16:34.130600 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jun 25 14:16:34.130639 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jun 25 14:16:34.130859 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012) Jun 25 14:16:34.130884 kernel: printk: console [ttyS0] disabled Jun 25 14:16:34.130906 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A Jun 25 14:16:34.130923 kernel: printk: console [ttyS0] enabled Jun 25 14:16:34.130939 kernel: printk: bootconsole [uart0] disabled Jun 25 14:16:34.130967 kernel: thunder_xcv, ver 1.0 Jun 25 14:16:34.130985 kernel: thunder_bgx, ver 1.0 Jun 25 14:16:34.131001 kernel: nicpf, ver 1.0 Jun 25 14:16:34.131017 kernel: nicvf, ver 1.0 Jun 25 14:16:34.131218 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jun 25 14:16:34.131409 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-06-25T14:16:33 UTC (1719324993) Jun 25 14:16:34.131440 kernel: hid: raw HID events driver (C) Jiri Kosina Jun 25 14:16:34.131456 kernel: NET: Registered PF_INET6 protocol family Jun 25 14:16:34.131473 kernel: Segment Routing with IPv6 Jun 25 14:16:34.131489 kernel: In-situ OAM (IOAM) with IPv6 Jun 25 14:16:34.131505 kernel: NET: Registered PF_PACKET protocol family Jun 25 14:16:34.131521 kernel: Key type dns_resolver registered Jun 25 14:16:34.131537 kernel: registered taskstats version 1 Jun 25 14:16:34.131553 kernel: Loading compiled-in X.509 certificates Jun 25 14:16:34.131569 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.95-flatcar: 0fa2e892f90caac26ef50b6d7e7f5c106b0c7e83' Jun 25 14:16:34.131590 kernel: Key type .fscrypt registered Jun 25 14:16:34.131606 kernel: Key type fscrypt-provisioning registered Jun 25 14:16:34.131661 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jun 25 14:16:34.131678 kernel: ima: Allocated hash algorithm: sha1 Jun 25 14:16:34.131694 kernel: ima: No architecture policies found Jun 25 14:16:34.131710 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jun 25 14:16:34.131727 kernel: clk: Disabling unused clocks Jun 25 14:16:34.131743 kernel: Freeing unused kernel memory: 34688K Jun 25 14:16:34.131759 kernel: Run /init as init process Jun 25 14:16:34.131781 kernel: with arguments: Jun 25 14:16:34.131797 kernel: /init Jun 25 14:16:34.131812 kernel: with environment: Jun 25 14:16:34.131828 kernel: HOME=/ Jun 25 14:16:34.131853 kernel: TERM=linux Jun 25 14:16:34.131888 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jun 25 14:16:34.131913 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:16:34.131935 systemd[1]: Detected virtualization amazon. Jun 25 14:16:34.131959 systemd[1]: Detected architecture arm64. Jun 25 14:16:34.131976 systemd[1]: Running in initrd. Jun 25 14:16:34.131993 systemd[1]: No hostname configured, using default hostname. Jun 25 14:16:34.132010 systemd[1]: Hostname set to . Jun 25 14:16:34.132028 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:16:34.132045 systemd[1]: Queued start job for default target initrd.target. Jun 25 14:16:34.132063 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:16:34.132080 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:16:34.132102 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:16:34.132119 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:16:34.132137 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:16:34.132154 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:16:34.132172 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:16:34.132190 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:16:34.132207 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:16:34.132230 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:16:34.132247 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:16:34.132265 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:16:34.132283 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:16:34.132300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:16:34.132317 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:16:34.132335 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:16:34.132352 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 25 14:16:34.132374 systemd[1]: Starting systemd-fsck-usr.service... Jun 25 14:16:34.132392 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:16:34.132409 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:16:34.132427 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... 
Jun 25 14:16:34.132444 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:16:34.132462 systemd[1]: Finished systemd-fsck-usr.service. Jun 25 14:16:34.132479 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:16:34.132497 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:16:34.132514 kernel: audit: type=1130 audit(1719324994.097:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.132536 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 25 14:16:34.132554 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:16:34.132572 kernel: audit: type=1130 audit(1719324994.117:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.132592 systemd-journald[242]: Journal started Jun 25 14:16:34.132716 systemd-journald[242]: Runtime Journal (/run/log/journal/ec20a2aafb301534647df830bf063805) is 8.0M, max 75.3M, 67.3M free. Jun 25 14:16:34.137254 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:16:34.137316 kernel: audit: type=1130 audit(1719324994.134:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.085950 systemd-modules-load[243]: Inserted module 'overlay' Jun 25 14:16:34.149727 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 25 14:16:34.149765 kernel: Bridge firewalling registered Jun 25 14:16:34.145930 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:16:34.154978 systemd-modules-load[243]: Inserted module 'br_netfilter' Jun 25 14:16:34.175116 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:16:34.183797 kernel: audit: type=1130 audit(1719324994.176:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.186361 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jun 25 14:16:34.194521 kernel: SCSI subsystem initialized Jun 25 14:16:34.201757 kernel: audit: type=1130 audit(1719324994.195:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.194398 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:16:34.204000 audit: BPF prog-id=6 op=LOAD Jun 25 14:16:34.209689 kernel: audit: type=1334 audit(1719324994.204:7): prog-id=6 op=LOAD Jun 25 14:16:34.210927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:16:34.229327 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 25 14:16:34.229396 kernel: device-mapper: uevent: version 1.0.3 Jun 25 14:16:34.229420 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jun 25 14:16:34.229452 dracut-cmdline[263]: dracut-dracut-053 Jun 25 14:16:34.229452 dracut-cmdline[263]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=db17b63e45e8142dc1ecd7dada86314b84dd868576326a7134a62617b1dac6e8 Jun 25 14:16:34.246497 systemd-modules-load[243]: Inserted module 'dm_multipath' Jun 25 14:16:34.251201 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:16:34.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.259666 kernel: audit: type=1130 audit(1719324994.251:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.264868 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:16:34.297910 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:16:34.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.306925 kernel: audit: type=1130 audit(1719324994.300:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.338251 systemd-resolved[268]: Positive Trust Anchors: Jun 25 14:16:34.340075 systemd-resolved[268]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:16:34.342833 systemd-resolved[268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:16:34.391654 kernel: Loading iSCSI transport class v2.0-870. Jun 25 14:16:34.404650 kernel: iscsi: registered transport (tcp) Jun 25 14:16:34.427657 kernel: iscsi: registered transport (qla4xxx) Jun 25 14:16:34.427737 kernel: QLogic iSCSI HBA Driver Jun 25 14:16:34.542338 systemd-resolved[268]: Defaulting to hostname 'linux'. Jun 25 14:16:34.544046 kernel: random: crng init done Jun 25 14:16:34.545990 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:16:34.556084 kernel: audit: type=1130 audit(1719324994.546:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.548202 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:16:34.568228 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 25 14:16:34.570000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.581021 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 25 14:16:34.664755 kernel: raid6: neonx8 gen() 6659 MB/s Jun 25 14:16:34.681696 kernel: raid6: neonx4 gen() 6487 MB/s Jun 25 14:16:34.698646 kernel: raid6: neonx2 gen() 5455 MB/s Jun 25 14:16:34.715642 kernel: raid6: neonx1 gen() 3956 MB/s Jun 25 14:16:34.732642 kernel: raid6: int64x8 gen() 3788 MB/s Jun 25 14:16:34.749641 kernel: raid6: int64x4 gen() 3713 MB/s Jun 25 14:16:34.766642 kernel: raid6: int64x2 gen() 3604 MB/s Jun 25 14:16:34.784328 kernel: raid6: int64x1 gen() 2780 MB/s Jun 25 14:16:34.784359 kernel: raid6: using algorithm neonx8 gen() 6659 MB/s Jun 25 14:16:34.802299 kernel: raid6: .... xor() 4869 MB/s, rmw enabled Jun 25 14:16:34.802342 kernel: raid6: using neon recovery algorithm Jun 25 14:16:34.810645 kernel: xor: measuring software checksum speed Jun 25 14:16:34.811652 kernel: 8regs : 10981 MB/sec Jun 25 14:16:34.813641 kernel: 32regs : 11968 MB/sec Jun 25 14:16:34.815904 kernel: arm64_neon : 9544 MB/sec Jun 25 14:16:34.815934 kernel: xor: using function: 32regs (11968 MB/sec) Jun 25 14:16:34.906667 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Jun 25 14:16:34.928218 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:16:34.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:34.931000 audit: BPF prog-id=7 op=LOAD Jun 25 14:16:34.931000 audit: BPF prog-id=8 op=LOAD Jun 25 14:16:34.938916 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:16:34.976523 systemd-udevd[444]: Using default interface naming scheme 'v252'. Jun 25 14:16:34.986880 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:16:34.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:34.995916 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 25 14:16:35.027819 dracut-pre-trigger[448]: rd.md=0: removing MD RAID activation Jun 25 14:16:35.095268 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:16:35.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:35.103354 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:16:35.205230 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:16:35.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:35.336637 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jun 25 14:16:35.336711 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012) Jun 25 14:16:35.359885 kernel: ena 0000:00:05.0: ENA device version: 0.10 Jun 25 14:16:35.360121 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1 Jun 25 14:16:35.360324 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80114000, mac addr 06:e8:4d:c9:d4:ab Jun 25 14:16:35.360539 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jun 25 14:16:35.362301 kernel: nvme nvme0: pci function 0000:00:04.0 Jun 25 14:16:35.364598 (udev-worker)[507]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:16:35.373662 kernel: nvme nvme0: 2/0/0 default/read/poll queues Jun 25 14:16:35.383111 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 25 14:16:35.383166 kernel: GPT:9289727 != 16777215 Jun 25 14:16:35.383189 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 25 14:16:35.384165 kernel: GPT:9289727 != 16777215 Jun 25 14:16:35.384795 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 25 14:16:35.386650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:16:35.478647 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/nvme0n1p6 scanned by (udev-worker) (504) Jun 25 14:16:35.499135 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM. Jun 25 14:16:35.519663 kernel: BTRFS: device fsid 4f04fb4d-edd3-40b1-b587-481b761003a7 devid 1 transid 33 /dev/nvme0n1p3 scanned by (udev-worker) (498) Jun 25 14:16:35.552727 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 14:16:35.614701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT. Jun 25 14:16:35.627451 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A. 
Jun 25 14:16:35.632052 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A. Jun 25 14:16:35.656359 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 25 14:16:35.674656 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:16:35.676757 disk-uuid[608]: Primary Header is updated. Jun 25 14:16:35.676757 disk-uuid[608]: Secondary Entries is updated. Jun 25 14:16:35.676757 disk-uuid[608]: Secondary Header is updated. Jun 25 14:16:35.698650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:16:36.709650 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9 Jun 25 14:16:36.710045 disk-uuid[609]: The operation has completed successfully. Jun 25 14:16:36.893673 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 25 14:16:36.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:36.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:36.893877 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 25 14:16:36.922127 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 25 14:16:36.930040 sh[951]: Success Jun 25 14:16:36.961726 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Jun 25 14:16:37.077869 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 25 14:16:37.084026 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jun 25 14:16:37.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.091878 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 25 14:16:37.118458 kernel: BTRFS info (device dm-0): first mount of filesystem 4f04fb4d-edd3-40b1-b587-481b761003a7 Jun 25 14:16:37.118520 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:16:37.118544 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jun 25 14:16:37.121218 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jun 25 14:16:37.121261 kernel: BTRFS info (device dm-0): using free space tree Jun 25 14:16:37.219647 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jun 25 14:16:37.241905 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 25 14:16:37.242268 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 25 14:16:37.251513 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 25 14:16:37.256431 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Jun 25 14:16:37.293344 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:16:37.293410 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:16:37.294589 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:16:37.312823 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:16:37.324728 systemd[1]: mnt-oem.mount: Deactivated successfully. Jun 25 14:16:37.328639 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:16:37.337304 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 25 14:16:37.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.344938 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jun 25 14:16:37.427272 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:16:37.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.431000 audit: BPF prog-id=9 op=LOAD Jun 25 14:16:37.438461 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:16:37.484830 systemd-networkd[1141]: lo: Link UP Jun 25 14:16:37.484854 systemd-networkd[1141]: lo: Gained carrier Jun 25 14:16:37.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.486697 systemd-networkd[1141]: Enumeration completed Jun 25 14:16:37.486861 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:16:37.489431 systemd[1]: Reached target network.target - Network. Jun 25 14:16:37.489870 systemd-networkd[1141]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:16:37.489877 systemd-networkd[1141]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:16:37.506053 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:16:37.513372 systemd-networkd[1141]: eth0: Link UP Jun 25 14:16:37.513386 systemd-networkd[1141]: eth0: Gained carrier Jun 25 14:16:37.513400 systemd-networkd[1141]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:16:37.538074 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:16:37.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.543767 systemd-networkd[1141]: eth0: DHCPv4 address 172.31.29.41/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 14:16:37.553479 systemd[1]: Starting iscsid.service - Open-iSCSI... 
Jun 25 14:16:37.562672 iscsid[1146]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:16:37.562672 iscsid[1146]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jun 25 14:16:37.562672 iscsid[1146]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jun 25 14:16:37.562672 iscsid[1146]: If using hardware iscsi like qla4xxx this message can be ignored. Jun 25 14:16:37.562672 iscsid[1146]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jun 25 14:16:37.562672 iscsid[1146]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jun 25 14:16:37.581000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.576184 systemd[1]: Started iscsid.service - Open-iSCSI. Jun 25 14:16:37.598544 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jun 25 14:16:37.623450 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 25 14:16:37.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.625794 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jun 25 14:16:37.629328 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:16:37.639931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 25 14:16:37.651696 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 25 14:16:37.676681 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:16:37.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.696312 ignition[1077]: Ignition 2.15.0 Jun 25 14:16:37.696975 ignition[1077]: Stage: fetch-offline Jun 25 14:16:37.699518 ignition[1077]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:37.700357 ignition[1077]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:37.702138 ignition[1077]: Ignition finished successfully Jun 25 14:16:37.706222 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:16:37.706000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.714546 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jun 25 14:16:37.735905 ignition[1168]: Ignition 2.15.0 Jun 25 14:16:37.736373 ignition[1168]: Stage: fetch Jun 25 14:16:37.736878 ignition[1168]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:37.736903 ignition[1168]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:37.737080 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:37.750542 ignition[1168]: PUT result: OK Jun 25 14:16:37.753637 ignition[1168]: parsed url from cmdline: "" Jun 25 14:16:37.753762 ignition[1168]: no config URL provided Jun 25 14:16:37.754544 ignition[1168]: reading system config file "/usr/lib/ignition/user.ign" Jun 25 14:16:37.754575 ignition[1168]: no config at "/usr/lib/ignition/user.ign" Jun 25 14:16:37.755220 ignition[1168]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:37.760821 ignition[1168]: PUT result: OK Jun 25 14:16:37.761910 ignition[1168]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1 Jun 25 14:16:37.764206 ignition[1168]: GET result: OK Jun 25 14:16:37.765199 ignition[1168]: parsing config with SHA512: 90701f58b0067a475f37647f84e98189fcf2cda11889d2ce165584c9b1a3705242463a673da568dde1169fc923b85d4313f30da4ac8582eb937db0bc2d8319cb Jun 25 14:16:37.776592 unknown[1168]: fetched base config from "system" Jun 25 14:16:37.777108 unknown[1168]: fetched base config from "system" Jun 25 14:16:37.777123 unknown[1168]: fetched user config from "aws" Jun 25 14:16:37.781891 ignition[1168]: fetch: fetch complete Jun 25 14:16:37.782050 ignition[1168]: fetch: fetch passed Jun 25 14:16:37.782165 ignition[1168]: Ignition finished successfully Jun 25 14:16:37.788870 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jun 25 14:16:37.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.796879 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 25 14:16:37.821424 ignition[1174]: Ignition 2.15.0 Jun 25 14:16:37.821451 ignition[1174]: Stage: kargs Jun 25 14:16:37.822284 ignition[1174]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:37.822314 ignition[1174]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:37.822470 ignition[1174]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:37.824239 ignition[1174]: PUT result: OK Jun 25 14:16:37.846692 ignition[1174]: kargs: kargs passed Jun 25 14:16:37.846809 ignition[1174]: Ignition finished successfully Jun 25 14:16:37.850113 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 25 14:16:37.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.861162 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
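The Ignition fetch stage above pulls the instance user data over IMDSv2: first a PUT to http://169.254.169.254/latest/api/token to obtain a session token, then a GET to http://169.254.169.254/2019-10-01/user-data with that token attached. A minimal Python sketch of the same request sequence (illustrative only: Ignition itself is a Go binary, and the header names are the standard IMDSv2 ones, assumed here rather than taken from the log):

    import urllib.request

    IMDS = "http://169.254.169.254"

    def imds_token(ttl_seconds: int = 21600) -> str:
        # PUT /latest/api/token, matching the "PUT ... attempt #1" / "PUT result: OK" log lines.
        req = urllib.request.Request(
            f"{IMDS}/latest/api/token",
            method="PUT",
            headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl_seconds)},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read().decode()

    def fetch_user_data(token: str) -> bytes:
        # GET /2019-10-01/user-data, the URL Ignition logs just before parsing the config.
        req = urllib.request.Request(
            f"{IMDS}/2019-10-01/user-data",
            headers={"X-aws-ec2-metadata-token": token},
        )
        with urllib.request.urlopen(req, timeout=2) as resp:
            return resp.read()

    user_data = fetch_user_data(imds_token())

Ignition then hashes and parses the fetched document (the "parsing config with SHA512: ..." line) and merges it with the base config shipped in the image ("fetched base config from \"system\"") before the kargs and disks stages run.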
Jun 25 14:16:37.884386 ignition[1180]: Ignition 2.15.0 Jun 25 14:16:37.885036 ignition[1180]: Stage: disks Jun 25 14:16:37.885683 ignition[1180]: no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:37.885709 ignition[1180]: no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:37.885893 ignition[1180]: PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:37.888218 ignition[1180]: PUT result: OK Jun 25 14:16:37.897908 ignition[1180]: disks: disks passed Jun 25 14:16:37.898208 ignition[1180]: Ignition finished successfully Jun 25 14:16:37.902406 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 25 14:16:37.904802 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 25 14:16:37.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.912750 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:16:37.927537 kernel: kauditd_printk_skb: 21 callbacks suppressed Jun 25 14:16:37.927573 kernel: audit: type=1130 audit(1719324997.902:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.918222 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:16:37.920073 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:16:37.921938 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:16:37.939734 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 25 14:16:37.983105 systemd-fsck[1188]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jun 25 14:16:37.989165 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 25 14:16:37.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:37.997685 kernel: audit: type=1130 audit(1719324997.992:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.003142 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 25 14:16:38.082629 kernel: EXT4-fs (nvme0n1p9): mounted filesystem with ordered data mode. Quota mode: none. Jun 25 14:16:38.083299 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 25 14:16:38.084127 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 25 14:16:38.102129 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:16:38.107272 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 25 14:16:38.111429 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jun 25 14:16:38.113232 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 25 14:16:38.113303 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:16:38.128828 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Jun 25 14:16:38.140000 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 25 14:16:38.142065 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/nvme0n1p6 scanned by mount (1205) Jun 25 14:16:38.150139 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:16:38.150198 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:16:38.151285 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:16:38.157645 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:16:38.161155 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:16:38.555266 initrd-setup-root[1229]: cut: /sysroot/etc/passwd: No such file or directory Jun 25 14:16:38.575671 initrd-setup-root[1236]: cut: /sysroot/etc/group: No such file or directory Jun 25 14:16:38.584451 initrd-setup-root[1243]: cut: /sysroot/etc/shadow: No such file or directory Jun 25 14:16:38.592809 initrd-setup-root[1250]: cut: /sysroot/etc/gshadow: No such file or directory Jun 25 14:16:38.854784 systemd-networkd[1141]: eth0: Gained IPv6LL Jun 25 14:16:38.904115 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 25 14:16:38.911708 kernel: audit: type=1130 audit(1719324998.905:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.917849 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 25 14:16:38.925892 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 25 14:16:38.935811 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 25 14:16:38.937521 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:16:38.969493 ignition[1316]: INFO : Ignition 2.15.0 Jun 25 14:16:38.971463 ignition[1316]: INFO : Stage: mount Jun 25 14:16:38.973074 ignition[1316]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:38.974969 ignition[1316]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:38.977201 ignition[1316]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:38.979972 ignition[1316]: INFO : PUT result: OK Jun 25 14:16:38.985246 ignition[1316]: INFO : mount: mount passed Jun 25 14:16:38.986783 ignition[1316]: INFO : Ignition finished successfully Jun 25 14:16:38.990027 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 25 14:16:38.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.997680 kernel: audit: type=1130 audit(1719324998.992:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:38.999837 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 25 14:16:39.010279 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
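The "cut: ... No such file or directory" messages above come from initrd-setup-root probing account databases in /sysroot before they exist. A rough Python equivalent of that probe, as an illustration only; the real initrd-setup-root logic is a shell script that is not shown in this log.

# Illustrative only: mimic the field extraction the logged cut invocations
# attempt against a not-yet-populated /sysroot/etc.
from pathlib import Path

def first_fields(path: str) -> list[str]:
    p = Path(path)
    if not p.exists():
        print(f"cut: {path}: No such file or directory")
        return []
    return [line.split(":", 1)[0] for line in p.read_text().splitlines()]

for db in ("passwd", "group", "shadow", "gshadow"):
    first_fields(f"/sysroot/etc/{db}")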
Jun 25 14:16:39.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:39.017661 kernel: audit: type=1130 audit(1719324999.011:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:39.028275 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 25 14:16:39.049650 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/nvme0n1p6 scanned by mount (1327) Jun 25 14:16:39.053957 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 2cf05490-8e39-46e6-bd3e-b9f42670b198 Jun 25 14:16:39.054006 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm Jun 25 14:16:39.055093 kernel: BTRFS info (device nvme0n1p6): using free space tree Jun 25 14:16:39.059628 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations Jun 25 14:16:39.063427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 25 14:16:39.095491 ignition[1345]: INFO : Ignition 2.15.0 Jun 25 14:16:39.098430 ignition[1345]: INFO : Stage: files Jun 25 14:16:39.098430 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:39.098430 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:39.098430 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:39.106025 ignition[1345]: INFO : PUT result: OK Jun 25 14:16:39.110427 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping Jun 25 14:16:39.131286 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 25 14:16:39.133818 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 25 14:16:39.165436 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 25 14:16:39.168138 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 25 14:16:39.171199 unknown[1345]: wrote ssh authorized keys file for user: core Jun 25 14:16:39.173286 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 25 14:16:39.177600 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:16:39.180795 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jun 25 14:16:39.180795 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:16:39.180795 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jun 25 14:16:39.246685 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 25 14:16:39.355266 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] 
writing file "/sysroot/home/core/install.sh" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:16:39.360565 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-arm64.raw: attempt #1 Jun 25 14:16:39.822848 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 25 14:16:40.269135 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw" Jun 25 14:16:40.273349 ignition[1345]: INFO : files: op(c): [started] processing unit "containerd.service" Jun 25 14:16:40.276290 ignition[1345]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(c): [finished] processing unit "containerd.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(e): [finished] 
processing unit "prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 25 14:16:40.280298 ignition[1345]: INFO : files: files passed Jun 25 14:16:40.280298 ignition[1345]: INFO : Ignition finished successfully Jun 25 14:16:40.323741 kernel: audit: type=1130 audit(1719325000.297:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.294044 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 25 14:16:40.313177 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 25 14:16:40.324003 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 25 14:16:40.336223 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 25 14:16:40.348951 kernel: audit: type=1130 audit(1719325000.337:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.348991 kernel: audit: type=1131 audit(1719325000.337:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.337000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.336455 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jun 25 14:16:40.356294 initrd-setup-root-after-ignition[1371]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:16:40.356294 initrd-setup-root-after-ignition[1371]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:16:40.362524 initrd-setup-root-after-ignition[1375]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 25 14:16:40.367271 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:16:40.370699 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 25 14:16:40.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:40.378755 kernel: audit: type=1130 audit(1719325000.367:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.388870 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 25 14:16:40.424755 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 25 14:16:40.425251 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 25 14:16:40.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.435425 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 25 14:16:40.437361 kernel: audit: type=1130 audit(1719325000.429:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.439268 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 25 14:16:40.443005 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 25 14:16:40.452945 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 25 14:16:40.478794 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:16:40.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.489751 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 25 14:16:40.510142 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:16:40.514476 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:16:40.518805 systemd[1]: Stopped target timers.target - Timer Units. Jun 25 14:16:40.522299 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 25 14:16:40.522733 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 25 14:16:40.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.528523 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 25 14:16:40.532538 systemd[1]: Stopped target basic.target - Basic System. Jun 25 14:16:40.536108 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 25 14:16:40.540227 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 25 14:16:40.544395 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 25 14:16:40.548576 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 25 14:16:40.552524 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. 
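The files stage a few entries above wrote the cgroupv1 marker, the helm archive, several manifests, a containerd drop-in, the prepare-helm unit and its preset, and the kubernetes sysext link. A hedged sketch of the kind of Ignition v3-style config fragment that would drive three of those operations; this is not the instance's actual user-data, and the spec version and drop-in body are assumptions.

# Not the real user-data: an illustrative Ignition-v3-style fragment.
# Field names follow the Ignition config spec; contents are assumptions.
import json

config = {
    "ignition": {"version": "3.3.0"},
    "storage": {
        "links": [{
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-arm64.raw",
            "hard": False,
        }],
    },
    "systemd": {
        "units": [
            {
                "name": "containerd.service",
                "dropins": [{
                    "name": "10-use-cgroupfs.conf",
                    "contents": "[Service]\n# illustrative contents only\n",
                }],
            },
            {"name": "prepare-helm.service", "enabled": True},
        ],
    },
}

print(json.dumps(config, indent=2))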
Jun 25 14:16:40.557003 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 25 14:16:40.561072 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 25 14:16:40.564949 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:16:40.569331 systemd[1]: Stopped target swap.target - Swaps. Jun 25 14:16:40.572419 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 25 14:16:40.572811 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 25 14:16:40.576000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.578342 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:16:40.582372 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 25 14:16:40.582754 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jun 25 14:16:40.586000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.588312 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 25 14:16:40.588761 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 25 14:16:40.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.594890 systemd[1]: ignition-files.service: Deactivated successfully. Jun 25 14:16:40.596740 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 25 14:16:40.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.609453 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 25 14:16:40.618585 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jun 25 14:16:40.626192 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 25 14:16:40.628081 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 25 14:16:40.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.628454 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:16:40.632859 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 25 14:16:40.635126 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 25 14:16:40.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.651811 systemd[1]: iscsiuio.service: Deactivated successfully. Jun 25 14:16:40.653537 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jun 25 14:16:40.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:40.666436 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 25 14:16:40.668267 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 25 14:16:40.675309 ignition[1389]: INFO : Ignition 2.15.0 Jun 25 14:16:40.677285 ignition[1389]: INFO : Stage: umount Jun 25 14:16:40.678690 ignition[1389]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 25 14:16:40.678690 ignition[1389]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jun 25 14:16:40.678690 ignition[1389]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jun 25 14:16:40.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.686469 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 25 14:16:40.690485 ignition[1389]: INFO : PUT result: OK Jun 25 14:16:40.702068 ignition[1389]: INFO : umount: umount passed Jun 25 14:16:40.706170 ignition[1389]: INFO : Ignition finished successfully Jun 25 14:16:40.705792 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 25 14:16:40.705984 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 25 14:16:40.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.715976 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 25 14:16:40.716090 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 25 14:16:40.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.726267 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 25 14:16:40.726532 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 25 14:16:40.732171 systemd[1]: ignition-fetch.service: Deactivated successfully. Jun 25 14:16:40.734044 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jun 25 14:16:40.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.738036 systemd[1]: Stopped target network.target - Network. Jun 25 14:16:40.741795 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 25 14:16:40.742088 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 25 14:16:40.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.748907 systemd[1]: Stopped target paths.target - Path Units. Jun 25 14:16:40.752183 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jun 25 14:16:40.757726 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:16:40.763557 systemd[1]: Stopped target slices.target - Slice Units. Jun 25 14:16:40.767175 systemd[1]: Stopped target sockets.target - Socket Units. Jun 25 14:16:40.771020 systemd[1]: iscsid.socket: Deactivated successfully. Jun 25 14:16:40.771542 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 25 14:16:40.779238 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 25 14:16:40.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.779338 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 25 14:16:40.781256 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 25 14:16:40.781350 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 25 14:16:40.783540 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 25 14:16:40.794527 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jun 25 14:16:40.799707 systemd-networkd[1141]: eth0: DHCPv6 lease lost Jun 25 14:16:40.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.800032 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 25 14:16:40.800259 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 25 14:16:40.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.812000 audit: BPF prog-id=6 op=UNLOAD Jun 25 14:16:40.813000 audit: BPF prog-id=9 op=UNLOAD Jun 25 14:16:40.810019 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 25 14:16:40.811891 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 25 14:16:40.815533 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 25 14:16:40.815674 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:16:40.832666 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 25 14:16:40.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.836005 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 25 14:16:40.836136 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 25 14:16:40.842358 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 25 14:16:40.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.842470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:16:40.852407 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 25 14:16:40.852516 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Jun 25 14:16:40.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.862950 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 25 14:16:40.863053 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:16:40.866745 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:16:40.884006 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 25 14:16:40.884187 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 25 14:16:40.898315 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 25 14:16:40.898663 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:16:40.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.906181 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 25 14:16:40.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.906401 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 25 14:16:40.920521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 25 14:16:40.941000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.920738 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 25 14:16:40.924870 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 25 14:16:40.924948 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:16:40.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.929406 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 25 14:16:40.929512 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 25 14:16:40.932058 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 25 14:16:40.932160 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 25 14:16:40.936807 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jun 25 14:16:40.936912 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 25 14:16:40.950385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 25 14:16:40.963887 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jun 25 14:16:40.964072 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:16:40.968943 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 25 14:16:40.969050 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:16:40.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.990086 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 25 14:16:40.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.998000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:41.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:40.990246 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jun 25 14:16:40.994132 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 25 14:16:40.995278 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 25 14:16:40.995518 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 25 14:16:40.997751 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 25 14:16:40.997962 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 25 14:16:41.000483 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 25 14:16:41.002465 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 25 14:16:41.002652 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 25 14:16:41.013918 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 25 14:16:41.061521 systemd[1]: Switching root. Jun 25 14:16:41.065000 audit: BPF prog-id=5 op=UNLOAD Jun 25 14:16:41.065000 audit: BPF prog-id=4 op=UNLOAD Jun 25 14:16:41.065000 audit: BPF prog-id=3 op=UNLOAD Jun 25 14:16:41.065000 audit: BPF prog-id=8 op=UNLOAD Jun 25 14:16:41.066000 audit: BPF prog-id=7 op=UNLOAD Jun 25 14:16:41.084840 systemd-journald[242]: Journal stopped Jun 25 14:16:43.440573 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). 
Jun 25 14:16:43.440762 kernel: SELinux: Permission cmd in class io_uring not defined in policy. Jun 25 14:16:43.440812 kernel: SELinux: the above unknown classes and permissions will be allowed Jun 25 14:16:43.440844 kernel: SELinux: policy capability network_peer_controls=1 Jun 25 14:16:43.440876 kernel: SELinux: policy capability open_perms=1 Jun 25 14:16:43.440906 kernel: SELinux: policy capability extended_socket_class=1 Jun 25 14:16:43.440940 kernel: SELinux: policy capability always_check_network=0 Jun 25 14:16:43.440972 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 25 14:16:43.441011 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 25 14:16:43.441045 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 25 14:16:43.441077 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 25 14:16:43.441112 systemd[1]: Successfully loaded SELinux policy in 104.019ms. Jun 25 14:16:43.441157 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.072ms. Jun 25 14:16:43.441191 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jun 25 14:16:43.441227 systemd[1]: Detected virtualization amazon. Jun 25 14:16:43.441262 systemd[1]: Detected architecture arm64. Jun 25 14:16:43.441299 systemd[1]: Detected first boot. Jun 25 14:16:43.441334 systemd[1]: Initializing machine ID from VM UUID. Jun 25 14:16:43.441367 systemd[1]: Populated /etc with preset unit settings. Jun 25 14:16:43.441400 systemd[1]: Queued start job for default target multi-user.target. Jun 25 14:16:43.441432 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jun 25 14:16:43.441483 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 25 14:16:43.441525 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 25 14:16:43.441556 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 25 14:16:43.441588 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 25 14:16:43.441652 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 25 14:16:43.441720 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 25 14:16:43.441758 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 25 14:16:43.441791 systemd[1]: Created slice user.slice - User and Session Slice. Jun 25 14:16:43.441854 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 25 14:16:43.441886 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jun 25 14:16:43.441928 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 25 14:16:43.441961 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 25 14:16:43.441999 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 25 14:16:43.442029 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 25 14:16:43.442061 systemd[1]: Reached target remote-fs.target - Remote File Systems. 
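Among the first-boot steps above, systemd initializes the machine ID from the VM UUID. A hedged sketch of where that value typically comes from on a virtualized instance: the DMI product_uuid exposed by the hypervisor. systemd's own derivation has additional sources and fallbacks that are not shown here.

# Assumption: the VM UUID is read from the DMI product_uuid; systemd has
# further fallbacks not reproduced in this sketch.
from pathlib import Path

def vm_uuid_as_machine_id() -> str:
    raw = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
    return raw.lower().replace("-", "")  # machine-id form: 32 hex characters

print(vm_uuid_as_machine_id())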
Jun 25 14:16:43.442091 systemd[1]: Reached target slices.target - Slice Units. Jun 25 14:16:43.442124 systemd[1]: Reached target swap.target - Swaps. Jun 25 14:16:43.442153 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 25 14:16:43.442234 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 25 14:16:43.442269 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jun 25 14:16:43.442303 kernel: kauditd_printk_skb: 45 callbacks suppressed Jun 25 14:16:43.442338 kernel: audit: type=1400 audit(1719325003.033:87): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jun 25 14:16:43.442368 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jun 25 14:16:43.442400 kernel: audit: type=1335 audit(1719325003.033:88): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:16:43.442429 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 25 14:16:43.442459 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jun 25 14:16:43.442488 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 25 14:16:43.442524 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 25 14:16:43.442555 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 25 14:16:43.442584 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 25 14:16:43.442639 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 25 14:16:43.442676 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 25 14:16:43.442709 systemd[1]: Mounting media.mount - External Media Directory... Jun 25 14:16:43.442739 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 25 14:16:43.442769 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 25 14:16:43.442810 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 25 14:16:43.442848 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 25 14:16:43.442882 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:43.442912 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 25 14:16:43.442943 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 25 14:16:43.442972 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:16:43.443002 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:16:43.443032 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:16:43.443063 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jun 25 14:16:43.443097 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:16:43.443131 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
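One of the sockets listed above is /dev/log, the classic syslog path that journald answers on. A minimal sketch of writing to it from userspace with the Python standard library; the logger name and message are assumptions.

# Anything written to /dev/log is picked up by systemd-journald, which is
# listening on the socket logged above.
import logging
from logging.handlers import SysLogHandler

log = logging.getLogger("demo")
log.addHandler(SysLogHandler(address="/dev/log"))
log.setLevel(logging.INFO)
log.info("hello from userspace, via /dev/log into the journal")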
Jun 25 14:16:43.443166 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jun 25 14:16:43.443196 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Jun 25 14:16:43.443225 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 25 14:16:43.443265 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 25 14:16:43.443295 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 25 14:16:43.443325 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 25 14:16:43.443355 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 25 14:16:43.443387 kernel: loop: module loaded Jun 25 14:16:43.443457 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 25 14:16:43.443490 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 25 14:16:43.443522 systemd[1]: Mounted media.mount - External Media Directory. Jun 25 14:16:43.443554 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 25 14:16:43.443583 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 25 14:16:43.443636 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 25 14:16:43.443713 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 25 14:16:43.443748 kernel: audit: type=1130 audit(1719325003.351:89): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.443809 kernel: ACPI: bus type drm_connector registered Jun 25 14:16:43.443858 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 25 14:16:43.443891 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 25 14:16:43.443924 kernel: audit: type=1130 audit(1719325003.365:90): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.443956 kernel: audit: type=1131 audit(1719325003.369:91): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.443988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:16:43.444037 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:16:43.444093 kernel: audit: type=1130 audit(1719325003.379:92): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.444167 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:16:43.444205 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:16:43.444239 kernel: audit: type=1131 audit(1719325003.383:93): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.444293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 25 14:16:43.444327 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:16:43.444359 kernel: audit: type=1130 audit(1719325003.395:94): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.444388 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:16:43.444417 kernel: audit: type=1131 audit(1719325003.395:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.444453 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:16:43.444483 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 25 14:16:43.444518 kernel: audit: type=1130 audit(1719325003.406:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.444547 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 25 14:16:43.444576 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 25 14:16:43.444623 systemd-journald[1531]: Journal started Jun 25 14:16:43.444747 systemd-journald[1531]: Runtime Journal (/run/log/journal/ec20a2aafb301534647df830bf063805) is 8.0M, max 75.3M, 67.3M free. Jun 25 14:16:43.033000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jun 25 14:16:43.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.383000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:43.406000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.418000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jun 25 14:16:43.418000 audit[1531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffffe559110 a2=4000 a3=1 items=0 ppid=1 pid=1531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:43.418000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jun 25 14:16:43.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.452577 kernel: fuse: init (API version 7.37) Jun 25 14:16:43.452682 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 25 14:16:43.452728 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 25 14:16:43.471018 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 25 14:16:43.471113 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:16:43.488661 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jun 25 14:16:43.492671 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:16:43.505521 systemd[1]: Started systemd-journald.service - Journal Service. Jun 25 14:16:43.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.510329 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 25 14:16:43.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:43.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.510842 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 25 14:16:43.513607 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 25 14:16:43.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.516128 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 25 14:16:43.518825 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jun 25 14:16:43.522695 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jun 25 14:16:43.537957 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 25 14:16:43.548001 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 25 14:16:43.558914 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 25 14:16:43.564354 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 25 14:16:43.584318 systemd-journald[1531]: Time spent on flushing to /var/log/journal/ec20a2aafb301534647df830bf063805 is 75.751ms for 1021 entries. Jun 25 14:16:43.584318 systemd-journald[1531]: System Journal (/var/log/journal/ec20a2aafb301534647df830bf063805) is 8.0M, max 195.6M, 187.6M free. Jun 25 14:16:43.665884 systemd-journald[1531]: Received client request to flush runtime journal. Jun 25 14:16:43.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.638975 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 25 14:16:43.668042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 25 14:16:43.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.672534 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 25 14:16:43.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.682289 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 25 14:16:43.714377 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 25 14:16:43.714000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:43.725000 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jun 25 14:16:43.749868 udevadm[1583]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jun 25 14:16:43.777214 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 25 14:16:43.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:43.786904 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 25 14:16:43.837318 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 25 14:16:43.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:44.545378 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 25 14:16:44.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:44.553161 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 25 14:16:44.611542 systemd-udevd[1591]: Using default interface naming scheme 'v252'. Jun 25 14:16:44.683748 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 25 14:16:44.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:44.695887 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 25 14:16:44.707961 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 25 14:16:44.798420 (udev-worker)[1604]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:16:44.810669 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1594) Jun 25 14:16:44.824267 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 25 14:16:44.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:44.838250 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jun 25 14:16:45.000647 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1606) Jun 25 14:16:45.004895 systemd-networkd[1599]: lo: Link UP Jun 25 14:16:45.004916 systemd-networkd[1599]: lo: Gained carrier Jun 25 14:16:45.005888 systemd-networkd[1599]: Enumeration completed Jun 25 14:16:45.011925 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:16:45.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:45.006100 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 25 14:16:45.006105 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:16:45.006112 systemd-networkd[1599]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 25 14:16:45.012581 systemd-networkd[1599]: eth0: Link UP Jun 25 14:16:45.013023 systemd-networkd[1599]: eth0: Gained carrier Jun 25 14:16:45.013063 systemd-networkd[1599]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 25 14:16:45.018093 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 25 14:16:45.028263 systemd-networkd[1599]: eth0: DHCPv4 address 172.31.29.41/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jun 25 14:16:45.285691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jun 25 14:16:45.288824 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jun 25 14:16:45.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.299931 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jun 25 14:16:45.336772 lvm[1712]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:16:45.375404 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jun 25 14:16:45.377970 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 25 14:16:45.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.390985 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jun 25 14:16:45.400890 lvm[1714]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jun 25 14:16:45.443588 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jun 25 14:16:45.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.445909 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 25 14:16:45.447969 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 25 14:16:45.448032 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 25 14:16:45.453844 systemd[1]: Reached target machines.target - Containers. Jun 25 14:16:45.465051 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jun 25 14:16:45.467774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jun 25 14:16:45.467959 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:45.471444 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jun 25 14:16:45.476956 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jun 25 14:16:45.493957 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jun 25 14:16:45.499549 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 25 14:16:45.502696 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1717 (bootctl) Jun 25 14:16:45.506206 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jun 25 14:16:45.526671 kernel: loop0: detected capacity change from 0 to 51896 Jun 25 14:16:45.545571 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 25 14:16:45.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.679246 systemd-fsck[1725]: fsck.fat 4.2 (2021-01-31) Jun 25 14:16:45.679246 systemd-fsck[1725]: /dev/nvme0n1p1: 242 files, 114659/258078 clusters Jun 25 14:16:45.684372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jun 25 14:16:45.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.696916 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 25 14:16:45.697835 systemd[1]: Mounting boot.mount - Boot partition... Jun 25 14:16:45.725880 kernel: loop1: detected capacity change from 0 to 59648 Jun 25 14:16:45.728576 systemd[1]: Mounted boot.mount - Boot partition. Jun 25 14:16:45.771092 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jun 25 14:16:45.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:45.863322 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 25 14:16:45.864712 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jun 25 14:16:45.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:45.907651 kernel: loop2: detected capacity change from 0 to 193208 Jun 25 14:16:46.031672 kernel: loop3: detected capacity change from 0 to 113264 Jun 25 14:16:46.167671 kernel: loop4: detected capacity change from 0 to 51896 Jun 25 14:16:46.190646 kernel: loop5: detected capacity change from 0 to 59648 Jun 25 14:16:46.209871 kernel: loop6: detected capacity change from 0 to 193208 Jun 25 14:16:46.235648 kernel: loop7: detected capacity change from 0 to 113264 Jun 25 14:16:46.248526 (sd-sysext)[1748]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jun 25 14:16:46.250558 (sd-sysext)[1748]: Merged extensions into '/usr'. Jun 25 14:16:46.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:46.254263 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 25 14:16:46.265997 systemd[1]: Starting ensure-sysext.service... Jun 25 14:16:46.274855 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jun 25 14:16:46.310449 systemd-tmpfiles[1751]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jun 25 14:16:46.312967 systemd-tmpfiles[1751]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 25 14:16:46.313677 systemd-tmpfiles[1751]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 25 14:16:46.315508 systemd-tmpfiles[1751]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 25 14:16:46.329090 systemd[1]: Reloading. Jun 25 14:16:46.727092 systemd-networkd[1599]: eth0: Gained IPv6LL Jun 25 14:16:46.758723 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:16:46.855181 ldconfig[1716]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 25 14:16:46.899217 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jun 25 14:16:46.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:46.902819 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 25 14:16:46.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:46.916363 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jun 25 14:16:46.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:46.930920 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:46.937462 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... 
Jun 25 14:16:46.944603 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 25 14:16:46.950930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 25 14:16:46.967025 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 25 14:16:46.979964 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 25 14:16:46.996501 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:46.998000 audit[1843]: SYSTEM_BOOT pid=1843 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.004290 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 25 14:16:47.010046 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 25 14:16:47.016535 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 25 14:16:47.019189 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:47.019584 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:47.030544 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:47.031034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:47.031410 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:47.038984 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 25 14:16:47.039378 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 25 14:16:47.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.040000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.044463 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 25 14:16:47.051979 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 25 14:16:47.054364 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 25 14:16:47.054568 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:47.055640 systemd[1]: Finished ensure-sysext.service. 
Jun 25 14:16:47.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.064231 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 25 14:16:47.081549 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 25 14:16:47.081971 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 25 14:16:47.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.098505 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 25 14:16:47.098916 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 25 14:16:47.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.101936 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 25 14:16:47.108255 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 25 14:16:47.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.111400 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 25 14:16:47.111831 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 25 14:16:47.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.114531 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 25 14:16:47.124037 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 25 14:16:47.168361 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jun 25 14:16:47.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:47.185000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jun 25 14:16:47.185000 audit[1866]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc9304af0 a2=420 a3=0 items=0 ppid=1832 pid=1866 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:47.185000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jun 25 14:16:47.187168 augenrules[1866]: No rules Jun 25 14:16:47.189165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:16:47.216859 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 25 14:16:47.219950 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 25 14:16:47.287095 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 25 14:16:47.289643 systemd[1]: Reached target time-set.target - System Time Set. Jun 25 14:16:47.299532 systemd-resolved[1836]: Positive Trust Anchors: Jun 25 14:16:47.299574 systemd-resolved[1836]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 25 14:16:47.299654 systemd-resolved[1836]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jun 25 14:16:47.308047 systemd-resolved[1836]: Defaulting to hostname 'linux'. Jun 25 14:16:47.311375 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 25 14:16:47.313604 systemd[1]: Reached target network.target - Network. Jun 25 14:16:47.315302 systemd[1]: Reached target network-online.target - Network is Online. Jun 25 14:16:47.317243 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 25 14:16:47.319227 systemd[1]: Reached target sysinit.target - System Initialization. Jun 25 14:16:47.321277 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 25 14:16:47.323307 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 25 14:16:47.325567 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 25 14:16:47.327797 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 25 14:16:47.329796 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 25 14:16:47.331807 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Jun 25 14:16:47.331877 systemd[1]: Reached target paths.target - Path Units. Jun 25 14:16:47.333502 systemd[1]: Reached target timers.target - Timer Units. Jun 25 14:16:47.337716 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jun 25 14:16:47.342444 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 25 14:16:47.346326 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 25 14:16:47.348705 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:47.350854 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 25 14:16:47.353112 systemd[1]: Reached target sockets.target - Socket Units. Jun 25 14:16:47.355077 systemd[1]: Reached target basic.target - Basic System. Jun 25 14:16:47.357311 systemd[1]: System is tainted: cgroupsv1 Jun 25 14:16:47.357403 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:16:47.357458 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 25 14:16:47.367948 systemd[1]: Starting containerd.service - containerd container runtime... Jun 25 14:16:47.374114 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jun 25 14:16:47.390942 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jun 25 14:16:47.397369 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 25 14:16:47.404328 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 25 14:16:47.414454 jq[1881]: false Jun 25 14:16:47.408180 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 25 14:16:47.420885 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:16:47.426523 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 25 14:16:47.433491 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jun 25 14:16:47.444841 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 25 14:16:47.453233 systemd[1]: Starting setup-oem.service - Setup OEM... Jun 25 14:16:47.464189 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 25 14:16:47.472133 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 25 14:16:47.486492 extend-filesystems[1882]: Found loop4 Jun 25 14:16:47.490659 extend-filesystems[1882]: Found loop5 Jun 25 14:16:47.492570 extend-filesystems[1882]: Found loop6 Jun 25 14:16:47.499826 extend-filesystems[1882]: Found loop7 Jun 25 14:16:47.501452 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 25 14:16:47.503476 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jun 25 14:16:47.503670 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
Jun 25 14:16:47.505546 extend-filesystems[1882]: Found nvme0n1 Jun 25 14:16:47.507420 extend-filesystems[1882]: Found nvme0n1p1 Jun 25 14:16:47.508874 systemd[1]: Starting update-engine.service - Update Engine... Jun 25 14:16:47.513165 extend-filesystems[1882]: Found nvme0n1p2 Jun 25 14:16:47.515844 extend-filesystems[1882]: Found nvme0n1p3 Jun 25 14:16:47.518968 extend-filesystems[1882]: Found usr Jun 25 14:16:47.521236 extend-filesystems[1882]: Found nvme0n1p4 Jun 25 14:16:47.525046 dbus-daemon[1880]: [system] SELinux support is enabled Jun 25 14:16:47.528169 extend-filesystems[1882]: Found nvme0n1p6 Jun 25 14:16:47.530057 extend-filesystems[1882]: Found nvme0n1p7 Jun 25 14:16:47.532027 extend-filesystems[1882]: Found nvme0n1p9 Jun 25 14:16:47.533919 extend-filesystems[1882]: Checking size of /dev/nvme0n1p9 Jun 25 14:16:47.536684 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 25 14:16:47.546858 jq[1901]: true Jun 25 14:16:47.547485 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 25 14:16:47.548164 dbus-daemon[1880]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1599 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jun 25 14:16:47.571860 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 25 14:16:47.572437 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 25 14:16:47.583550 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 25 14:16:47.588209 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 25 14:16:47.603762 jq[1910]: true Jun 25 14:16:47.604267 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 25 14:16:47.604337 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 25 14:16:47.606632 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 25 14:16:47.609990 dbus-daemon[1880]: [system] Successfully activated service 'org.freedesktop.systemd1' Jun 25 14:16:47.606697 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 25 14:16:47.626527 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jun 25 14:16:47.627312 systemd-timesyncd[1839]: Contacted time server 5.161.184.148:123 (0.flatcar.pool.ntp.org). Jun 25 14:16:47.627406 systemd-timesyncd[1839]: Initial clock synchronization to Tue 2024-06-25 14:16:47.386968 UTC. Jun 25 14:16:47.642112 update_engine[1899]: I0625 14:16:47.641997 1899 main.cc:92] Flatcar Update Engine starting Jun 25 14:16:47.684013 systemd[1]: Started update-engine.service - Update Engine. Jun 25 14:16:47.687457 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jun 25 14:16:47.692884 update_engine[1899]: I0625 14:16:47.692394 1899 update_check_scheduler.cc:74] Next update check in 7m50s Jun 25 14:16:47.690106 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jun 25 14:16:47.729178 systemd[1]: Finished setup-oem.service - Setup OEM. Jun 25 14:16:47.734425 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jun 25 14:16:47.756102 tar[1907]: linux-arm64/helm Jun 25 14:16:47.781605 extend-filesystems[1882]: Resized partition /dev/nvme0n1p9 Jun 25 14:16:47.795774 extend-filesystems[1947]: resize2fs 1.47.0 (5-Feb-2023) Jun 25 14:16:47.802397 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jun 25 14:16:47.834792 systemd[1]: motdgen.service: Deactivated successfully. Jun 25 14:16:47.835571 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 25 14:16:47.844655 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 1489915 blocks Jun 25 14:16:47.973665 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 1489915 Jun 25 14:16:48.006925 bash[1953]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:16:48.010391 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 25 14:16:48.019847 extend-filesystems[1947]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jun 25 14:16:48.019847 extend-filesystems[1947]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 25 14:16:48.019847 extend-filesystems[1947]: The filesystem on /dev/nvme0n1p9 is now 1489915 (4k) blocks long. Jun 25 14:16:48.041803 extend-filesystems[1882]: Resized filesystem in /dev/nvme0n1p9 Jun 25 14:16:48.064217 systemd[1]: Starting sshkeys.service... Jun 25 14:16:48.066529 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 25 14:16:48.067074 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jun 25 14:16:48.104675 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: Initializing new seelog logger Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: New Seelog Logger Creation Complete Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 processing appconfig overrides Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 processing appconfig overrides Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 processing appconfig overrides Jun 25 14:16:48.114277 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO Proxy environment variables: Jun 25 14:16:48.152896 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.152896 amazon-ssm-agent[1946]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jun 25 14:16:48.152896 amazon-ssm-agent[1946]: 2024/06/25 14:16:48 processing appconfig overrides Jun 25 14:16:48.149251 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jun 25 14:16:48.216037 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO https_proxy: Jun 25 14:16:48.275673 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (1985) Jun 25 14:16:48.316513 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO http_proxy: Jun 25 14:16:48.414361 systemd-logind[1895]: Watching system buttons on /dev/input/event0 (Power Button) Jun 25 14:16:48.414417 systemd-logind[1895]: Watching system buttons on /dev/input/event1 (Sleep Button) Jun 25 14:16:48.424701 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO no_proxy: Jun 25 14:16:48.426794 systemd-logind[1895]: New seat seat0. Jun 25 14:16:48.445906 systemd[1]: Started systemd-logind.service - User Login Management. Jun 25 14:16:48.524799 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO Checking if agent identity type OnPrem can be assumed Jun 25 14:16:48.613811 dbus-daemon[1880]: [system] Successfully activated service 'org.freedesktop.hostname1' Jun 25 14:16:48.614066 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jun 25 14:16:48.618067 dbus-daemon[1880]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1918 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jun 25 14:16:48.636953 locksmithd[1934]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 25 14:16:48.683412 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO Checking if agent identity type EC2 can be assumed Jun 25 14:16:48.680370 systemd[1]: Starting polkit.service - Authorization Manager... Jun 25 14:16:48.721327 polkitd[2046]: Started polkitd version 121 Jun 25 14:16:48.741590 coreos-metadata[1879]: Jun 25 14:16:48.741 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 14:16:48.742152 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO Agent will take identity from EC2 Jun 25 14:16:48.745931 coreos-metadata[1879]: Jun 25 14:16:48.745 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jun 25 14:16:48.747942 coreos-metadata[1879]: Jun 25 14:16:48.747 INFO Fetch successful Jun 25 14:16:48.747942 coreos-metadata[1879]: Jun 25 14:16:48.747 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jun 25 14:16:48.749921 coreos-metadata[1879]: Jun 25 14:16:48.749 INFO Fetch successful Jun 25 14:16:48.749921 coreos-metadata[1879]: Jun 25 14:16:48.749 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jun 25 14:16:48.752070 coreos-metadata[1879]: Jun 25 14:16:48.751 INFO Fetch successful Jun 25 14:16:48.752070 coreos-metadata[1879]: Jun 25 14:16:48.751 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jun 25 14:16:48.754521 coreos-metadata[1879]: Jun 25 14:16:48.754 INFO Fetch successful Jun 25 14:16:48.754521 coreos-metadata[1879]: Jun 25 14:16:48.754 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jun 25 14:16:48.758940 coreos-metadata[1879]: Jun 25 14:16:48.758 INFO Fetch failed with 404: resource not found Jun 25 14:16:48.758940 coreos-metadata[1879]: Jun 25 14:16:48.758 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jun 25 14:16:48.761919 coreos-metadata[1879]: Jun 25 14:16:48.761 INFO Fetch successful Jun 25 14:16:48.761919 coreos-metadata[1879]: Jun 25 14:16:48.761 INFO Fetching 
http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jun 25 14:16:48.763831 coreos-metadata[1879]: Jun 25 14:16:48.763 INFO Fetch successful Jun 25 14:16:48.763831 coreos-metadata[1879]: Jun 25 14:16:48.763 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jun 25 14:16:48.767531 coreos-metadata[1879]: Jun 25 14:16:48.767 INFO Fetch successful Jun 25 14:16:48.767531 coreos-metadata[1879]: Jun 25 14:16:48.767 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jun 25 14:16:48.768825 coreos-metadata[1879]: Jun 25 14:16:48.768 INFO Fetch successful Jun 25 14:16:48.768825 coreos-metadata[1879]: Jun 25 14:16:48.768 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jun 25 14:16:48.769984 coreos-metadata[1879]: Jun 25 14:16:48.769 INFO Fetch successful Jun 25 14:16:48.778117 polkitd[2046]: Loading rules from directory /etc/polkit-1/rules.d Jun 25 14:16:48.778257 polkitd[2046]: Loading rules from directory /usr/share/polkit-1/rules.d Jun 25 14:16:48.792028 polkitd[2046]: Finished loading, compiling and executing 2 rules Jun 25 14:16:48.793743 dbus-daemon[1880]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jun 25 14:16:48.796286 polkitd[2046]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jun 25 14:16:48.841086 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:48.853440 containerd[1911]: time="2024-06-25T14:16:48.847076481Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jun 25 14:16:48.851799 systemd[1]: Started polkit.service - Authorization Manager. Jun 25 14:16:48.866599 systemd-hostnamed[1918]: Hostname set to (transient) Jun 25 14:16:48.867022 systemd-resolved[1836]: System hostname changed to 'ip-172-31-29-41'. Jun 25 14:16:48.895175 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jun 25 14:16:48.898197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jun 25 14:16:48.944694 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:49.039752 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] using named pipe channel for IPC Jun 25 14:16:49.125602 containerd[1911]: time="2024-06-25T14:16:49.125490176Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jun 25 14:16:49.125602 containerd[1911]: time="2024-06-25T14:16:49.125621280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.138976 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jun 25 14:16:49.169067 containerd[1911]: time="2024-06-25T14:16:49.132816831Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:49.169227 containerd[1911]: time="2024-06-25T14:16:49.169060759Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jun 25 14:16:49.169670 containerd[1911]: time="2024-06-25T14:16:49.169576036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:49.169670 containerd[1911]: time="2024-06-25T14:16:49.169662223Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jun 25 14:16:49.169902 containerd[1911]: time="2024-06-25T14:16:49.169860163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.170025 containerd[1911]: time="2024-06-25T14:16:49.169985822Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:49.170089 containerd[1911]: time="2024-06-25T14:16:49.170023073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.170236 containerd[1911]: time="2024-06-25T14:16:49.170178060Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.170706 containerd[1911]: time="2024-06-25T14:16:49.170655805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.170833 containerd[1911]: time="2024-06-25T14:16:49.170713096Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jun 25 14:16:49.170833 containerd[1911]: time="2024-06-25T14:16:49.170740614Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jun 25 14:16:49.171093 containerd[1911]: time="2024-06-25T14:16:49.171044980Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jun 25 14:16:49.171172 containerd[1911]: time="2024-06-25T14:16:49.171088891Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jun 25 14:16:49.171236 containerd[1911]: time="2024-06-25T14:16:49.171212353Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jun 25 14:16:49.171297 containerd[1911]: time="2024-06-25T14:16:49.171240525Z" level=info msg="metadata content store policy set" policy=shared Jun 25 14:16:49.186758 containerd[1911]: time="2024-06-25T14:16:49.186488078Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jun 25 14:16:49.186758 containerd[1911]: time="2024-06-25T14:16:49.186568633Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jun 25 14:16:49.186758 containerd[1911]: time="2024-06-25T14:16:49.186739979Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jun 25 14:16:49.186985 containerd[1911]: time="2024-06-25T14:16:49.186807412Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jun 25 14:16:49.186985 containerd[1911]: time="2024-06-25T14:16:49.186841941Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jun 25 14:16:49.186985 containerd[1911]: time="2024-06-25T14:16:49.186866935Z" level=info msg="NRI interface is disabled by configuration." Jun 25 14:16:49.186985 containerd[1911]: time="2024-06-25T14:16:49.186895083Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jun 25 14:16:49.187252 containerd[1911]: time="2024-06-25T14:16:49.187193023Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jun 25 14:16:49.187323 containerd[1911]: time="2024-06-25T14:16:49.187254613Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jun 25 14:16:49.187323 containerd[1911]: time="2024-06-25T14:16:49.187287132Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jun 25 14:16:49.187420 containerd[1911]: time="2024-06-25T14:16:49.187322035Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jun 25 14:16:49.187420 containerd[1911]: time="2024-06-25T14:16:49.187354343Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187420 containerd[1911]: time="2024-06-25T14:16:49.187393429Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187567 containerd[1911]: time="2024-06-25T14:16:49.187425328Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187567 containerd[1911]: time="2024-06-25T14:16:49.187454599Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187567 containerd[1911]: time="2024-06-25T14:16:49.187487527Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187742 containerd[1911]: time="2024-06-25T14:16:49.187517662Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187742 containerd[1911]: time="2024-06-25T14:16:49.187631659Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.187742 containerd[1911]: time="2024-06-25T14:16:49.187664131Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jun 25 14:16:49.187930 containerd[1911]: time="2024-06-25T14:16:49.187888993Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jun 25 14:16:49.188547 containerd[1911]: time="2024-06-25T14:16:49.188495411Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jun 25 14:16:49.188835 containerd[1911]: time="2024-06-25T14:16:49.188568769Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.188835 containerd[1911]: time="2024-06-25T14:16:49.188635653Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Jun 25 14:16:49.188835 containerd[1911]: time="2024-06-25T14:16:49.188687486Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.188849426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.188883476Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.188916216Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.188944716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.188973776Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189022 containerd[1911]: time="2024-06-25T14:16:49.189003572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189300 containerd[1911]: time="2024-06-25T14:16:49.189032048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189300 containerd[1911]: time="2024-06-25T14:16:49.189060442Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189300 containerd[1911]: time="2024-06-25T14:16:49.189091010Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jun 25 14:16:49.189439 containerd[1911]: time="2024-06-25T14:16:49.189360777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189439 containerd[1911]: time="2024-06-25T14:16:49.189396941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189543 containerd[1911]: time="2024-06-25T14:16:49.189437604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189543 containerd[1911]: time="2024-06-25T14:16:49.189468546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189543 containerd[1911]: time="2024-06-25T14:16:49.189499966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189543 containerd[1911]: time="2024-06-25T14:16:49.189532134Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189783 containerd[1911]: time="2024-06-25T14:16:49.189562293Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jun 25 14:16:49.189783 containerd[1911]: time="2024-06-25T14:16:49.189602091Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jun 25 14:16:49.190149 containerd[1911]: time="2024-06-25T14:16:49.190035493Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jun 25 14:16:49.190474 containerd[1911]: time="2024-06-25T14:16:49.190148660Z" level=info msg="Connect containerd service" Jun 25 14:16:49.190474 containerd[1911]: time="2024-06-25T14:16:49.190211454Z" level=info msg="using legacy CRI server" Jun 25 14:16:49.190474 containerd[1911]: time="2024-06-25T14:16:49.190232756Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jun 25 14:16:49.190474 containerd[1911]: time="2024-06-25T14:16:49.190314888Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jun 25 14:16:49.191352 containerd[1911]: time="2024-06-25T14:16:49.191289495Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:16:49.191478 containerd[1911]: 
time="2024-06-25T14:16:49.191378112Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jun 25 14:16:49.191478 containerd[1911]: time="2024-06-25T14:16:49.191414066Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jun 25 14:16:49.191478 containerd[1911]: time="2024-06-25T14:16:49.191442332Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jun 25 14:16:49.191478 containerd[1911]: time="2024-06-25T14:16:49.191470586Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jun 25 14:16:49.192085 containerd[1911]: time="2024-06-25T14:16:49.192041225Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jun 25 14:16:49.192181 containerd[1911]: time="2024-06-25T14:16:49.192146856Z" level=info msg=serving... address=/run/containerd/containerd.sock Jun 25 14:16:49.192715 containerd[1911]: time="2024-06-25T14:16:49.192296690Z" level=info msg="Start subscribing containerd event" Jun 25 14:16:49.192786 containerd[1911]: time="2024-06-25T14:16:49.192734450Z" level=info msg="Start recovering state" Jun 25 14:16:49.192885 containerd[1911]: time="2024-06-25T14:16:49.192849417Z" level=info msg="Start event monitor" Jun 25 14:16:49.192949 containerd[1911]: time="2024-06-25T14:16:49.192882192Z" level=info msg="Start snapshots syncer" Jun 25 14:16:49.192949 containerd[1911]: time="2024-06-25T14:16:49.192906158Z" level=info msg="Start cni network conf syncer for default" Jun 25 14:16:49.192949 containerd[1911]: time="2024-06-25T14:16:49.192929434Z" level=info msg="Start streaming server" Jun 25 14:16:49.197914 containerd[1911]: time="2024-06-25T14:16:49.193061250Z" level=info msg="containerd successfully booted in 0.370689s" Jun 25 14:16:49.193210 systemd[1]: Started containerd.service - containerd container runtime. Jun 25 14:16:49.232746 coreos-metadata[1972]: Jun 25 14:16:49.232 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jun 25 14:16:49.238760 coreos-metadata[1972]: Jun 25 14:16:49.238 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jun 25 14:16:49.239240 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jun 25 14:16:49.240101 coreos-metadata[1972]: Jun 25 14:16:49.239 INFO Fetch successful Jun 25 14:16:49.240101 coreos-metadata[1972]: Jun 25 14:16:49.240 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jun 25 14:16:49.242863 coreos-metadata[1972]: Jun 25 14:16:49.242 INFO Fetch successful Jun 25 14:16:49.249108 unknown[1972]: wrote ssh authorized keys file for user: core Jun 25 14:16:49.312298 update-ssh-keys[2106]: Updated "/home/core/.ssh/authorized_keys" Jun 25 14:16:49.313355 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jun 25 14:16:49.317970 systemd[1]: Finished sshkeys.service. Jun 25 14:16:49.341209 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] Starting Core Agent Jun 25 14:16:49.441704 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [amazon-ssm-agent] registrar detected. 
Attempting registration Jun 25 14:16:49.541851 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [Registrar] Starting registrar module Jun 25 14:16:49.570819 amazon-ssm-agent[1946]: 2024-06-25 14:16:48 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jun 25 14:16:49.570962 amazon-ssm-agent[1946]: 2024-06-25 14:16:49 INFO [EC2Identity] EC2 registration was successful. Jun 25 14:16:49.570962 amazon-ssm-agent[1946]: 2024-06-25 14:16:49 INFO [CredentialRefresher] credentialRefresher has started Jun 25 14:16:49.570962 amazon-ssm-agent[1946]: 2024-06-25 14:16:49 INFO [CredentialRefresher] Starting credentials refresher loop Jun 25 14:16:49.570962 amazon-ssm-agent[1946]: 2024-06-25 14:16:49 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jun 25 14:16:49.642286 amazon-ssm-agent[1946]: 2024-06-25 14:16:49 INFO [CredentialRefresher] Next credential rotation will be in 30.20832004731667 minutes Jun 25 14:16:49.944955 tar[1907]: linux-arm64/LICENSE Jun 25 14:16:49.945705 tar[1907]: linux-arm64/README.md Jun 25 14:16:49.963231 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jun 25 14:16:50.445931 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:16:50.611156 amazon-ssm-agent[1946]: 2024-06-25 14:16:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jun 25 14:16:50.713065 amazon-ssm-agent[1946]: 2024-06-25 14:16:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2128) started Jun 25 14:16:50.813710 amazon-ssm-agent[1946]: 2024-06-25 14:16:50 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jun 25 14:16:51.968572 kubelet[2126]: E0625 14:16:51.968368 2126 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:16:51.972656 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:16:51.973054 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:16:52.091526 sshd_keygen[1952]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 25 14:16:52.132496 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 25 14:16:52.146337 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 25 14:16:52.158941 systemd[1]: issuegen.service: Deactivated successfully. Jun 25 14:16:52.159494 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 25 14:16:52.170699 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 25 14:16:52.192746 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 25 14:16:52.202483 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 25 14:16:52.215467 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 25 14:16:52.218529 systemd[1]: Reached target getty.target - Login Prompts. Jun 25 14:16:52.221148 systemd[1]: Reached target multi-user.target - Multi-User System. Jun 25 14:16:52.235513 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... 
Jun 25 14:16:52.250645 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jun 25 14:16:52.251204 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jun 25 14:16:52.255828 systemd[1]: Startup finished in 8.924s (kernel) + 10.716s (userspace) = 19.641s. Jun 25 14:16:55.887418 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jun 25 14:16:55.897308 systemd[1]: Started sshd@0-172.31.29.41:22-139.178.68.195:35738.service - OpenSSH per-connection server daemon (139.178.68.195:35738). Jun 25 14:16:56.089367 sshd[2162]: Accepted publickey for core from 139.178.68.195 port 35738 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:56.093189 sshd[2162]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:56.108436 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jun 25 14:16:56.116170 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jun 25 14:16:56.123956 systemd-logind[1895]: New session 1 of user core. Jun 25 14:16:56.143694 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jun 25 14:16:56.151884 systemd[1]: Starting user@500.service - User Manager for UID 500... Jun 25 14:16:56.157932 (systemd)[2167]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:56.346970 systemd[2167]: Queued start job for default target default.target. Jun 25 14:16:56.347367 systemd[2167]: Reached target paths.target - Paths. Jun 25 14:16:56.347403 systemd[2167]: Reached target sockets.target - Sockets. Jun 25 14:16:56.347433 systemd[2167]: Reached target timers.target - Timers. Jun 25 14:16:56.347460 systemd[2167]: Reached target basic.target - Basic System. Jun 25 14:16:56.347681 systemd[1]: Started user@500.service - User Manager for UID 500. Jun 25 14:16:56.349783 systemd[2167]: Reached target default.target - Main User Target. Jun 25 14:16:56.349891 systemd[2167]: Startup finished in 177ms. Jun 25 14:16:56.356193 systemd[1]: Started session-1.scope - Session 1 of User core. Jun 25 14:16:56.508150 systemd[1]: Started sshd@1-172.31.29.41:22-139.178.68.195:35740.service - OpenSSH per-connection server daemon (139.178.68.195:35740). Jun 25 14:16:56.679094 sshd[2176]: Accepted publickey for core from 139.178.68.195 port 35740 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:56.682314 sshd[2176]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:56.690626 systemd-logind[1895]: New session 2 of user core. Jun 25 14:16:56.698134 systemd[1]: Started session-2.scope - Session 2 of User core. Jun 25 14:16:56.829006 sshd[2176]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:56.835021 systemd[1]: sshd@1-172.31.29.41:22-139.178.68.195:35740.service: Deactivated successfully. Jun 25 14:16:56.836934 systemd[1]: session-2.scope: Deactivated successfully. Jun 25 14:16:56.837535 systemd-logind[1895]: Session 2 logged out. Waiting for processes to exit. Jun 25 14:16:56.839315 systemd-logind[1895]: Removed session 2. Jun 25 14:16:56.861210 systemd[1]: Started sshd@2-172.31.29.41:22-139.178.68.195:35744.service - OpenSSH per-connection server daemon (139.178.68.195:35744). 
Jun 25 14:16:57.025775 sshd[2183]: Accepted publickey for core from 139.178.68.195 port 35744 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:57.028838 sshd[2183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:57.037262 systemd-logind[1895]: New session 3 of user core. Jun 25 14:16:57.043397 systemd[1]: Started session-3.scope - Session 3 of User core. Jun 25 14:16:57.168378 sshd[2183]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:57.173684 systemd-logind[1895]: Session 3 logged out. Waiting for processes to exit. Jun 25 14:16:57.175141 systemd[1]: sshd@2-172.31.29.41:22-139.178.68.195:35744.service: Deactivated successfully. Jun 25 14:16:57.176479 systemd[1]: session-3.scope: Deactivated successfully. Jun 25 14:16:57.179150 systemd-logind[1895]: Removed session 3. Jun 25 14:16:57.197267 systemd[1]: Started sshd@3-172.31.29.41:22-139.178.68.195:35756.service - OpenSSH per-connection server daemon (139.178.68.195:35756). Jun 25 14:16:57.371917 sshd[2190]: Accepted publickey for core from 139.178.68.195 port 35756 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:57.375014 sshd[2190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:57.383345 systemd-logind[1895]: New session 4 of user core. Jun 25 14:16:57.390266 systemd[1]: Started session-4.scope - Session 4 of User core. Jun 25 14:16:57.526545 sshd[2190]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:57.533258 systemd[1]: sshd@3-172.31.29.41:22-139.178.68.195:35756.service: Deactivated successfully. Jun 25 14:16:57.534902 systemd[1]: session-4.scope: Deactivated successfully. Jun 25 14:16:57.535941 systemd-logind[1895]: Session 4 logged out. Waiting for processes to exit. Jun 25 14:16:57.539789 systemd-logind[1895]: Removed session 4. Jun 25 14:16:57.558399 systemd[1]: Started sshd@4-172.31.29.41:22-139.178.68.195:35764.service - OpenSSH per-connection server daemon (139.178.68.195:35764). Jun 25 14:16:57.726597 sshd[2197]: Accepted publickey for core from 139.178.68.195 port 35764 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:57.729692 sshd[2197]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:57.739862 systemd-logind[1895]: New session 5 of user core. Jun 25 14:16:57.747422 systemd[1]: Started session-5.scope - Session 5 of User core. Jun 25 14:16:58.005726 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jun 25 14:16:58.006328 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:58.025508 sudo[2201]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:58.049028 sshd[2197]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:58.054916 systemd[1]: sshd@4-172.31.29.41:22-139.178.68.195:35764.service: Deactivated successfully. Jun 25 14:16:58.056498 systemd[1]: session-5.scope: Deactivated successfully. Jun 25 14:16:58.059585 systemd-logind[1895]: Session 5 logged out. Waiting for processes to exit. Jun 25 14:16:58.062163 systemd-logind[1895]: Removed session 5. Jun 25 14:16:58.078286 systemd[1]: Started sshd@5-172.31.29.41:22-139.178.68.195:47668.service - OpenSSH per-connection server daemon (139.178.68.195:47668). 
Jun 25 14:16:58.246386 sshd[2205]: Accepted publickey for core from 139.178.68.195 port 47668 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:58.249569 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:58.257981 systemd-logind[1895]: New session 6 of user core. Jun 25 14:16:58.264176 systemd[1]: Started session-6.scope - Session 6 of User core. Jun 25 14:16:58.371928 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jun 25 14:16:58.372553 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:58.378762 sudo[2210]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:58.389769 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jun 25 14:16:58.390351 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:58.415260 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:58.416000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:16:58.418914 kernel: kauditd_printk_skb: 50 callbacks suppressed Jun 25 14:16:58.418972 kernel: audit: type=1305 audit(1719325018.416:143): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jun 25 14:16:58.419430 auditctl[2213]: No rules Jun 25 14:16:58.416000 audit[2213]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd953ed40 a2=420 a3=0 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:58.420805 systemd[1]: audit-rules.service: Deactivated successfully. Jun 25 14:16:58.421291 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jun 25 14:16:58.425909 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jun 25 14:16:58.426337 kernel: audit: type=1300 audit(1719325018.416:143): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd953ed40 a2=420 a3=0 items=0 ppid=1 pid=2213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:58.437483 kernel: audit: type=1327 audit(1719325018.416:143): proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:16:58.437592 kernel: audit: type=1131 audit(1719325018.420:144): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.416000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Jun 25 14:16:58.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.475507 augenrules[2231]: No rules Jun 25 14:16:58.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:58.483000 audit[2209]: USER_END pid=2209 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.477774 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jun 25 14:16:58.483900 sudo[2209]: pam_unix(sudo:session): session closed for user root Jun 25 14:16:58.489349 kernel: audit: type=1130 audit(1719325018.477:145): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.489463 kernel: audit: type=1106 audit(1719325018.483:146): pid=2209 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.483000 audit[2209]: CRED_DISP pid=2209 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.493510 kernel: audit: type=1104 audit(1719325018.483:147): pid=2209 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.513405 sshd[2205]: pam_unix(sshd:session): session closed for user core Jun 25 14:16:58.514000 audit[2205]: USER_END pid=2205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.520332 systemd[1]: sshd@5-172.31.29.41:22-139.178.68.195:47668.service: Deactivated successfully. Jun 25 14:16:58.514000 audit[2205]: CRED_DISP pid=2205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.522702 systemd[1]: session-6.scope: Deactivated successfully. Jun 25 14:16:58.526172 kernel: audit: type=1106 audit(1719325018.514:148): pid=2205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.526270 kernel: audit: type=1104 audit(1719325018.514:149): pid=2205 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.514000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.41:22-139.178.68.195:47668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.526509 systemd-logind[1895]: Session 6 logged out. Waiting for processes to exit. 
Jun 25 14:16:58.530828 kernel: audit: type=1131 audit(1719325018.514:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-172.31.29.41:22-139.178.68.195:47668 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.536075 systemd-logind[1895]: Removed session 6. Jun 25 14:16:58.540300 systemd[1]: Started sshd@6-172.31.29.41:22-139.178.68.195:47678.service - OpenSSH per-connection server daemon (139.178.68.195:47678). Jun 25 14:16:58.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.41:22-139.178.68.195:47678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.706000 audit[2238]: USER_ACCT pid=2238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.707832 sshd[2238]: Accepted publickey for core from 139.178.68.195 port 47678 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:16:58.708000 audit[2238]: CRED_ACQ pid=2238 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.709000 audit[2238]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe989f940 a2=3 a3=1 items=0 ppid=1 pid=2238 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:16:58.709000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:16:58.710820 sshd[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:16:58.719678 systemd-logind[1895]: New session 7 of user core. Jun 25 14:16:58.723126 systemd[1]: Started session-7.scope - Session 7 of User core. Jun 25 14:16:58.732000 audit[2238]: USER_START pid=2238 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.735000 audit[2241]: CRED_ACQ pid=2241 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:16:58.832000 audit[2242]: USER_ACCT pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:58.834239 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jun 25 14:16:58.833000 audit[2242]: CRED_REFR pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:16:58.834977 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jun 25 14:16:58.838000 audit[2242]: USER_START pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:16:59.327507 systemd[1]: Starting docker.service - Docker Application Container Engine... Jun 25 14:17:00.013405 dockerd[2251]: time="2024-06-25T14:17:00.013325862Z" level=info msg="Starting up" Jun 25 14:17:00.053499 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3436409102-merged.mount: Deactivated successfully. Jun 25 14:17:00.545411 systemd[1]: var-lib-docker-metacopy\x2dcheck1542559542-merged.mount: Deactivated successfully. Jun 25 14:17:00.571292 dockerd[2251]: time="2024-06-25T14:17:00.571193290Z" level=info msg="Loading containers: start." Jun 25 14:17:00.697000 audit[2283]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2283 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.697000 audit[2283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffe1661140 a2=0 a3=1 items=0 ppid=2251 pid=2283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.697000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jun 25 14:17:00.701000 audit[2285]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2285 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.701000 audit[2285]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffd8156140 a2=0 a3=1 items=0 ppid=2251 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.701000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jun 25 14:17:00.706000 audit[2287]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2287 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.706000 audit[2287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffef18c9f0 a2=0 a3=1 items=0 ppid=2251 pid=2287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.706000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:17:00.711000 audit[2289]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2289 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.711000 audit[2289]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc6bf2df0 a2=0 a3=1 items=0 ppid=2251 pid=2289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.711000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:17:00.719000 audit[2291]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=2291 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.719000 audit[2291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffee2721c0 a2=0 a3=1 items=0 ppid=2251 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.719000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Jun 25 14:17:00.724000 audit[2293]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=2293 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.724000 audit[2293]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd1c38730 a2=0 a3=1 items=0 ppid=2251 pid=2293 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.724000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Jun 25 14:17:00.744000 audit[2295]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2295 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.744000 audit[2295]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd5b702b0 a2=0 a3=1 items=0 ppid=2251 pid=2295 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.744000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jun 25 14:17:00.751000 audit[2297]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=2297 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.751000 audit[2297]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc5563560 a2=0 a3=1 items=0 ppid=2251 pid=2297 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.751000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jun 25 14:17:00.756000 audit[2299]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=2299 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.756000 audit[2299]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=fffffaabc470 a2=0 a3=1 items=0 ppid=2251 pid=2299 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.756000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:17:00.774000 audit[2303]: NETFILTER_CFG table=filter:11 
family=2 entries=1 op=nft_unregister_rule pid=2303 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.774000 audit[2303]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffd24bae50 a2=0 a3=1 items=0 ppid=2251 pid=2303 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.774000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:17:00.777000 audit[2304]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2304 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.777000 audit[2304]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffea0010a0 a2=0 a3=1 items=0 ppid=2251 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.777000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:17:00.801662 kernel: Initializing XFRM netlink socket Jun 25 14:17:00.847825 (udev-worker)[2263]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:17:00.871000 audit[2312]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=2312 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.871000 audit[2312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff69f3050 a2=0 a3=1 items=0 ppid=2251 pid=2312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.871000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jun 25 14:17:00.889000 audit[2315]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=2315 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.889000 audit[2315]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff9eab8e0 a2=0 a3=1 items=0 ppid=2251 pid=2315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.889000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jun 25 14:17:00.901000 audit[2319]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=2319 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.901000 audit[2319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff15cffb0 a2=0 a3=1 items=0 ppid=2251 pid=2319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.901000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Jun 
25 14:17:00.906000 audit[2321]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=2321 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.906000 audit[2321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffc7b920b0 a2=0 a3=1 items=0 ppid=2251 pid=2321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.906000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Jun 25 14:17:00.913000 audit[2323]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=2323 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.913000 audit[2323]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffce239cb0 a2=0 a3=1 items=0 ppid=2251 pid=2323 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.913000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jun 25 14:17:00.919000 audit[2325]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=2325 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.919000 audit[2325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffcf6664f0 a2=0 a3=1 items=0 ppid=2251 pid=2325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.919000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jun 25 14:17:00.925000 audit[2327]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=2327 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.925000 audit[2327]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffe7bd1c30 a2=0 a3=1 items=0 ppid=2251 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.925000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Jun 25 14:17:00.940000 audit[2330]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=2330 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.940000 audit[2330]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffff5a66dc0 a2=0 a3=1 items=0 ppid=2251 pid=2330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.940000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jun 25 14:17:00.946000 audit[2332]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=2332 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.946000 audit[2332]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffc3a90f10 a2=0 a3=1 items=0 ppid=2251 pid=2332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.946000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jun 25 14:17:00.951000 audit[2334]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=2334 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.951000 audit[2334]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffcc01e040 a2=0 a3=1 items=0 ppid=2251 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.951000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jun 25 14:17:00.955000 audit[2336]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=2336 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.955000 audit[2336]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc6ad9cd0 a2=0 a3=1 items=0 ppid=2251 pid=2336 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.955000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jun 25 14:17:00.957179 systemd-networkd[1599]: docker0: Link UP Jun 25 14:17:00.974000 audit[2340]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=2340 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.974000 audit[2340]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe33f3bf0 a2=0 a3=1 items=0 ppid=2251 pid=2340 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.974000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:17:00.977000 audit[2341]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=2341 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:00.977000 audit[2341]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffd7653b0 a2=0 a3=1 items=0 ppid=2251 pid=2341 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:00.977000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jun 25 14:17:00.979557 dockerd[2251]: time="2024-06-25T14:17:00.979486135Z" level=info msg="Loading containers: done." Jun 25 14:17:01.144894 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck375526772-merged.mount: Deactivated successfully. Jun 25 14:17:01.162592 dockerd[2251]: time="2024-06-25T14:17:01.162527908Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jun 25 14:17:01.163266 dockerd[2251]: time="2024-06-25T14:17:01.162900136Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jun 25 14:17:01.163266 dockerd[2251]: time="2024-06-25T14:17:01.163111531Z" level=info msg="Daemon has completed initialization" Jun 25 14:17:01.215914 dockerd[2251]: time="2024-06-25T14:17:01.215825796Z" level=info msg="API listen on /run/docker.sock" Jun 25 14:17:01.216734 systemd[1]: Started docker.service - Docker Application Container Engine. Jun 25 14:17:01.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:02.163346 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jun 25 14:17:02.163771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:02.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:02.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:02.174254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:02.824680 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:02.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:02.934647 kubelet[2391]: E0625 14:17:02.934545 2391 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:17:02.942246 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:17:02.942705 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:17:02.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jun 25 14:17:02.983465 containerd[1911]: time="2024-06-25T14:17:02.983380688Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\"" Jun 25 14:17:03.694081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3201799540.mount: Deactivated successfully. Jun 25 14:17:05.467710 containerd[1911]: time="2024-06-25T14:17:05.467632705Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:05.470114 containerd[1911]: time="2024-06-25T14:17:05.470052753Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.11: active requests=0, bytes read=31671538" Jun 25 14:17:05.472235 containerd[1911]: time="2024-06-25T14:17:05.472175585Z" level=info msg="ImageCreate event name:\"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:05.476325 containerd[1911]: time="2024-06-25T14:17:05.476273258Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:05.480265 containerd[1911]: time="2024-06-25T14:17:05.480213274Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:05.482689 containerd[1911]: time="2024-06-25T14:17:05.482583066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.11\" with image id \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:aec9d1701c304eee8607d728a39baaa511d65bef6dd9861010618f63fbadeb10\", size \"31668338\" in 2.49910583s" Jun 25 14:17:05.482844 containerd[1911]: time="2024-06-25T14:17:05.482692238Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.11\" returns image reference \"sha256:d2b5500cdb8d455434ebcaa569918eb0c5e68e82d75d4c85c509519786f24a8d\"" Jun 25 14:17:05.521126 containerd[1911]: time="2024-06-25T14:17:05.521059310Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\"" Jun 25 14:17:07.332262 containerd[1911]: time="2024-06-25T14:17:07.332176111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:07.334386 containerd[1911]: time="2024-06-25T14:17:07.334324638Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.11: active requests=0, bytes read=28893118" Jun 25 14:17:07.335209 containerd[1911]: time="2024-06-25T14:17:07.335155771Z" level=info msg="ImageCreate event name:\"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:07.339279 containerd[1911]: time="2024-06-25T14:17:07.339216723Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:07.343128 containerd[1911]: time="2024-06-25T14:17:07.343063647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 
14:17:07.345643 containerd[1911]: time="2024-06-25T14:17:07.345560185Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.11\" with image id \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6014c3572ec683841bbb16f87b94da28ee0254b95e2dba2d1850d62bd0111f09\", size \"30445463\" in 1.824430079s" Jun 25 14:17:07.347704 containerd[1911]: time="2024-06-25T14:17:07.345669030Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.11\" returns image reference \"sha256:24cd2c3bd254238005fcc2fcc15e9e56347b218c10b8399a28d1bf813800266a\"" Jun 25 14:17:07.389043 containerd[1911]: time="2024-06-25T14:17:07.388929668Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\"" Jun 25 14:17:08.701261 containerd[1911]: time="2024-06-25T14:17:08.701199147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.703816 containerd[1911]: time="2024-06-25T14:17:08.703743071Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.11: active requests=0, bytes read=15358438" Jun 25 14:17:08.704183 containerd[1911]: time="2024-06-25T14:17:08.704145553Z" level=info msg="ImageCreate event name:\"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.708194 containerd[1911]: time="2024-06-25T14:17:08.708144244Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.712200 containerd[1911]: time="2024-06-25T14:17:08.712130170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:08.714693 containerd[1911]: time="2024-06-25T14:17:08.714564007Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.11\" with image id \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:46cf7475c8daffb743c856a1aea0ddea35e5acd2418be18b1e22cf98d9c9b445\", size \"16910801\" in 1.325258715s" Jun 25 14:17:08.714926 containerd[1911]: time="2024-06-25T14:17:08.714888256Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.11\" returns image reference \"sha256:fdf13db9a96001adee7d1c69fd6849d6cd45fc3c138c95c8240d353eb79acf50\"" Jun 25 14:17:08.758069 containerd[1911]: time="2024-06-25T14:17:08.757960810Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\"" Jun 25 14:17:10.079704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4225791704.mount: Deactivated successfully. 
Jun 25 14:17:10.747766 containerd[1911]: time="2024-06-25T14:17:10.747683800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:10.750177 containerd[1911]: time="2024-06-25T14:17:10.750095702Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.11: active requests=0, bytes read=24772461" Jun 25 14:17:10.751951 containerd[1911]: time="2024-06-25T14:17:10.751883829Z" level=info msg="ImageCreate event name:\"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:10.754731 containerd[1911]: time="2024-06-25T14:17:10.754673393Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.28.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:10.757324 containerd[1911]: time="2024-06-25T14:17:10.757268567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:10.758967 containerd[1911]: time="2024-06-25T14:17:10.758902068Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.11\" with image id \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\", repo tag \"registry.k8s.io/kube-proxy:v1.28.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ae4b671d4cfc23dd75030bb4490207cd939b3b11a799bcb4119698cd712eb5b4\", size \"24771480\" in 2.000829391s" Jun 25 14:17:10.759216 containerd[1911]: time="2024-06-25T14:17:10.759173735Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.11\" returns image reference \"sha256:e195d3cf134bc9d64104f5e82e95fce811d55b1cdc9cb26fb8f52c8d107d1661\"" Jun 25 14:17:10.802366 containerd[1911]: time="2024-06-25T14:17:10.802277693Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jun 25 14:17:11.331089 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2176283543.mount: Deactivated successfully. 
Jun 25 14:17:11.351510 containerd[1911]: time="2024-06-25T14:17:11.351425003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:11.353943 containerd[1911]: time="2024-06-25T14:17:11.353871538Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=268821" Jun 25 14:17:11.356170 containerd[1911]: time="2024-06-25T14:17:11.356104278Z" level=info msg="ImageCreate event name:\"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:11.363030 containerd[1911]: time="2024-06-25T14:17:11.362975561Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:11.366457 containerd[1911]: time="2024-06-25T14:17:11.366401766Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:11.368267 containerd[1911]: time="2024-06-25T14:17:11.368194324Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"268051\" in 565.840752ms" Jun 25 14:17:11.368467 containerd[1911]: time="2024-06-25T14:17:11.368261334Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Jun 25 14:17:11.408898 containerd[1911]: time="2024-06-25T14:17:11.408824381Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jun 25 14:17:12.027642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4208971628.mount: Deactivated successfully. Jun 25 14:17:13.163193 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jun 25 14:17:13.163588 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:13.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:13.169099 kernel: kauditd_printk_skb: 88 callbacks suppressed Jun 25 14:17:13.169215 kernel: audit: type=1130 audit(1719325033.162:189): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:13.169269 kernel: audit: type=1131 audit(1719325033.162:190): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:13.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:13.180214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jun 25 14:17:14.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:14.545932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:14.550667 kernel: audit: type=1130 audit(1719325034.545:191): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:14.668142 kubelet[2537]: E0625 14:17:14.668025 2537 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 25 14:17:14.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:17:14.673033 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 25 14:17:14.673453 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 25 14:17:14.679532 kernel: audit: type=1131 audit(1719325034.673:192): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jun 25 14:17:16.424054 containerd[1911]: time="2024-06-25T14:17:16.423991261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.426311 containerd[1911]: time="2024-06-25T14:17:16.426238468Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=65200786" Jun 25 14:17:16.427196 containerd[1911]: time="2024-06-25T14:17:16.427155441Z" level=info msg="ImageCreate event name:\"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.431760 containerd[1911]: time="2024-06-25T14:17:16.431679545Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.436454 containerd[1911]: time="2024-06-25T14:17:16.436388389Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:16.439301 containerd[1911]: time="2024-06-25T14:17:16.439231174Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"65198393\" in 5.030089106s" Jun 25 14:17:16.441847 containerd[1911]: time="2024-06-25T14:17:16.439297487Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Jun 25 14:17:16.481691 containerd[1911]: time="2024-06-25T14:17:16.481638041Z" level=info msg="PullImage 
\"registry.k8s.io/coredns/coredns:v1.10.1\"" Jun 25 14:17:17.101059 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760059500.mount: Deactivated successfully. Jun 25 14:17:17.664581 containerd[1911]: time="2024-06-25T14:17:17.664516087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:17.667231 containerd[1911]: time="2024-06-25T14:17:17.667173746Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=14558462" Jun 25 14:17:17.668972 containerd[1911]: time="2024-06-25T14:17:17.668914722Z" level=info msg="ImageCreate event name:\"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:17.674461 containerd[1911]: time="2024-06-25T14:17:17.674400549Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:17.677747 containerd[1911]: time="2024-06-25T14:17:17.677694638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:17:17.680281 containerd[1911]: time="2024-06-25T14:17:17.680189379Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"14557471\" in 1.198272596s" Jun 25 14:17:17.680710 containerd[1911]: time="2024-06-25T14:17:17.680284024Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:97e04611ad43405a2e5863ae17c6f1bc9181bdefdaa78627c432ef754a4eb108\"" Jun 25 14:17:18.901993 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jun 25 14:17:18.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:18.907634 kernel: audit: type=1131 audit(1719325038.901:193): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:23.679651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:23.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:23.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:23.687808 kernel: audit: type=1130 audit(1719325043.678:194): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:17:23.687895 kernel: audit: type=1131 audit(1719325043.678:195): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:23.694597 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:23.740593 systemd[1]: Reloading. Jun 25 14:17:24.187415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:17:24.401963 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:24.401000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:24.407651 kernel: audit: type=1130 audit(1719325044.401:196): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:24.411756 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:24.412938 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:17:24.414113 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:24.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:24.418689 kernel: audit: type=1131 audit(1719325044.412:197): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:24.428759 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:24.951023 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:24.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:24.958695 kernel: audit: type=1130 audit(1719325044.952:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:25.051317 kubelet[2731]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:17:25.051979 kubelet[2731]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:17:25.052088 kubelet[2731]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:17:25.054833 kubelet[2731]: I0625 14:17:25.054443 2731 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:17:25.635423 kubelet[2731]: I0625 14:17:25.635380 2731 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:17:25.635713 kubelet[2731]: I0625 14:17:25.635694 2731 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:17:25.636278 kubelet[2731]: I0625 14:17:25.636253 2731 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:17:25.668994 kubelet[2731]: I0625 14:17:25.668957 2731 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:17:25.673014 kubelet[2731]: E0625 14:17:25.672954 2731 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://172.31.29.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.690402 kubelet[2731]: W0625 14:17:25.690324 2731 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:17:25.693236 kubelet[2731]: I0625 14:17:25.693186 2731 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 14:17:25.693987 kubelet[2731]: I0625 14:17:25.693945 2731 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:17:25.694289 kubelet[2731]: I0625 14:17:25.694244 2731 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:17:25.694500 kubelet[2731]: I0625 14:17:25.694302 2731 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:17:25.694500 kubelet[2731]: I0625 14:17:25.694323 2731 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:17:25.694683 kubelet[2731]: I0625 14:17:25.694545 2731 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:17:25.696942 
kubelet[2731]: I0625 14:17:25.696881 2731 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:17:25.696942 kubelet[2731]: I0625 14:17:25.696934 2731 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 25 14:17:25.697143 kubelet[2731]: I0625 14:17:25.697005 2731 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:17:25.697143 kubelet[2731]: I0625 14:17:25.697030 2731 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:17:25.700199 kubelet[2731]: I0625 14:17:25.700150 2731 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:17:25.705208 kubelet[2731]: W0625 14:17:25.705152 2731 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jun 25 14:17:25.706235 kubelet[2731]: I0625 14:17:25.706181 2731 server.go:1232] "Started kubelet" Jun 25 14:17:25.706485 kubelet[2731]: W0625 14:17:25.706427 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.706667 kubelet[2731]: E0625 14:17:25.706645 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.706952 kubelet[2731]: W0625 14:17:25.706899 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-41&limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.707107 kubelet[2731]: E0625 14:17:25.707085 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-41&limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.707557 kubelet[2731]: I0625 14:17:25.707530 2731 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:17:25.710214 kubelet[2731]: I0625 14:17:25.710175 2731 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:17:25.711883 kubelet[2731]: I0625 14:17:25.711836 2731 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:17:25.712325 kubelet[2731]: I0625 14:17:25.712280 2731 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:17:25.715713 kubelet[2731]: E0625 14:17:25.715502 2731 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-41.17dc45050abf324e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", 
Namespace:"", Name:"ip-172-31-29-41", UID:"ip-172-31-29-41", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-41"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 17, 25, 706142286, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 17, 25, 706142286, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-41"}': 'Post "https://172.31.29.41:6443/api/v1/namespaces/default/events": dial tcp 172.31.29.41:6443: connect: connection refused'(may retry after sleeping) Jun 25 14:17:25.716748 kubelet[2731]: E0625 14:17:25.716701 2731 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:17:25.716912 kubelet[2731]: E0625 14:17:25.716756 2731 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:17:25.718940 kubelet[2731]: I0625 14:17:25.718887 2731 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:17:25.718000 audit[2741]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.718000 audit[2741]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffa323a60 a2=0 a3=1 items=0 ppid=2731 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.726420 kubelet[2731]: I0625 14:17:25.723357 2731 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:17:25.726420 kubelet[2731]: I0625 14:17:25.724057 2731 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:17:25.726420 kubelet[2731]: I0625 14:17:25.724170 2731 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:17:25.726420 kubelet[2731]: W0625 14:17:25.724978 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.726420 kubelet[2731]: E0625 14:17:25.725044 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.726420 kubelet[2731]: E0625 14:17:25.725188 2731 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-41?timeout=10s\": dial tcp 172.31.29.41:6443: connect: connection refused" interval="200ms" Jun 25 14:17:25.728155 kernel: audit: type=1325 audit(1719325045.718:199): table=mangle:26 family=2 entries=2 op=nft_register_chain pid=2741 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.728226 kernel: audit: type=1300 audit(1719325045.718:199): arch=c00000b7 
syscall=211 success=yes exit=136 a0=3 a1=fffffa323a60 a2=0 a3=1 items=0 ppid=2731 pid=2741 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:17:25.733168 kernel: audit: type=1327 audit(1719325045.718:199): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:17:25.733000 audit[2743]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.733000 audit[2743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa29e080 a2=0 a3=1 items=0 ppid=2731 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.743245 kernel: audit: type=1325 audit(1719325045.733:200): table=filter:27 family=2 entries=1 op=nft_register_chain pid=2743 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.743458 kernel: audit: type=1300 audit(1719325045.733:200): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa29e080 a2=0 a3=1 items=0 ppid=2731 pid=2743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:17:25.748237 kernel: audit: type=1327 audit(1719325045.733:200): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:17:25.752000 audit[2747]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.752000 audit[2747]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe6f03000 a2=0 a3=1 items=0 ppid=2731 pid=2747 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.752000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:25.757687 kernel: audit: type=1325 audit(1719325045.752:201): table=filter:28 family=2 entries=2 op=nft_register_chain pid=2747 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.765000 audit[2750]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=2750 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.765000 audit[2750]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe8df8a00 a2=0 a3=1 items=0 ppid=2731 pid=2750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.765000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:25.786000 audit[2754]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.786000 audit[2754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffc9f16ae0 a2=0 a3=1 items=0 ppid=2731 pid=2754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.786000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jun 25 14:17:25.789434 kubelet[2731]: I0625 14:17:25.789396 2731 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:17:25.790000 audit[2755]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=2755 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:25.790000 audit[2755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffc3990a0 a2=0 a3=1 items=0 ppid=2731 pid=2755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.790000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jun 25 14:17:25.794095 kubelet[2731]: I0625 14:17:25.794062 2731 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:17:25.794000 audit[2756]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=2756 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.794000 audit[2756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc7094e0 a2=0 a3=1 items=0 ppid=2731 pid=2756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.794000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:17:25.796567 kubelet[2731]: I0625 14:17:25.796538 2731 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:17:25.796696 kubelet[2731]: I0625 14:17:25.796582 2731 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:17:25.796762 kubelet[2731]: E0625 14:17:25.796692 2731 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:17:25.797000 audit[2757]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=2757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:25.797000 audit[2757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd6f621c0 a2=0 a3=1 items=0 ppid=2731 pid=2757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.797000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jun 25 14:17:25.799000 audit[2758]: NETFILTER_CFG table=nat:34 family=10 entries=2 op=nft_register_chain pid=2758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:25.799000 audit[2758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=fffff78bd3c0 a2=0 a3=1 items=0 ppid=2731 pid=2758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:17:25.803446 kubelet[2731]: W0625 14:17:25.803365 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.803713 kubelet[2731]: E0625 14:17:25.803691 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:25.805000 audit[2759]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_chain pid=2759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.805000 audit[2759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd08d9900 a2=0 a3=1 items=0 ppid=2731 pid=2759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.805000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jun 25 14:17:25.809000 audit[2760]: NETFILTER_CFG table=filter:36 family=10 entries=2 op=nft_register_chain pid=2760 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:25.809000 audit[2760]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffe60e9f60 a2=0 a3=1 items=0 ppid=2731 pid=2760 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:17:25.812000 audit[2761]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=2761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:25.812000 audit[2761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc0fa9000 a2=0 a3=1 items=0 ppid=2731 pid=2761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:25.812000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jun 25 14:17:25.838768 kubelet[2731]: I0625 14:17:25.838725 2731 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:25.839359 kubelet[2731]: E0625 14:17:25.839329 2731 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.41:6443/api/v1/nodes\": dial tcp 172.31.29.41:6443: connect: connection refused" node="ip-172-31-29-41" Jun 25 14:17:25.846891 kubelet[2731]: I0625 14:17:25.846839 2731 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:17:25.846891 kubelet[2731]: I0625 14:17:25.846890 2731 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:17:25.847116 kubelet[2731]: I0625 14:17:25.846924 2731 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:17:25.849737 kubelet[2731]: I0625 14:17:25.849679 2731 policy_none.go:49] "None policy: Start" Jun 25 14:17:25.850805 kubelet[2731]: I0625 14:17:25.850770 2731 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:17:25.850968 kubelet[2731]: I0625 14:17:25.850821 2731 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:17:25.860675 kubelet[2731]: I0625 14:17:25.860592 2731 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:17:25.861085 kubelet[2731]: I0625 14:17:25.861036 2731 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:17:25.866847 kubelet[2731]: E0625 14:17:25.866748 2731 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-29-41\" not found" Jun 25 14:17:25.897233 kubelet[2731]: I0625 14:17:25.897080 2731 topology_manager.go:215] "Topology Admit Handler" podUID="73c731527d8475d2688fd0150ae8c35c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-41" Jun 25 14:17:25.902028 
kubelet[2731]: I0625 14:17:25.901981 2731 topology_manager.go:215] "Topology Admit Handler" podUID="1b5987f0d71195c3d87dcb38b8b60733" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.911204 kubelet[2731]: I0625 14:17:25.911156 2731 topology_manager.go:215] "Topology Admit Handler" podUID="03efba9855c382525fd29fdf38812f1e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-41" Jun 25 14:17:25.926315 kubelet[2731]: E0625 14:17:25.926265 2731 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-41?timeout=10s\": dial tcp 172.31.29.41:6443: connect: connection refused" interval="400ms" Jun 25 14:17:25.930282 kubelet[2731]: I0625 14:17:25.930246 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:25.930475 kubelet[2731]: I0625 14:17:25.930452 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.930668 kubelet[2731]: I0625 14:17:25.930646 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.930824 kubelet[2731]: I0625 14:17:25.930802 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.930977 kubelet[2731]: I0625 14:17:25.930956 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:25.931134 kubelet[2731]: I0625 14:17:25.931112 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.931317 kubelet[2731]: I0625 14:17:25.931296 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:25.931463 kubelet[2731]: I0625 14:17:25.931442 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03efba9855c382525fd29fdf38812f1e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-41\" (UID: \"03efba9855c382525fd29fdf38812f1e\") " pod="kube-system/kube-scheduler-ip-172-31-29-41" Jun 25 14:17:25.931625 kubelet[2731]: I0625 14:17:25.931588 2731 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:26.042053 kubelet[2731]: I0625 14:17:26.042021 2731 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:26.042832 kubelet[2731]: E0625 14:17:26.042802 2731 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.41:6443/api/v1/nodes\": dial tcp 172.31.29.41:6443: connect: connection refused" node="ip-172-31-29-41" Jun 25 14:17:26.226015 containerd[1911]: time="2024-06-25T14:17:26.225919512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-41,Uid:03efba9855c382525fd29fdf38812f1e,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:26.227595 containerd[1911]: time="2024-06-25T14:17:26.227294737Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-41,Uid:1b5987f0d71195c3d87dcb38b8b60733,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:26.228763 containerd[1911]: time="2024-06-25T14:17:26.228045831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-41,Uid:73c731527d8475d2688fd0150ae8c35c,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:26.326904 kubelet[2731]: E0625 14:17:26.326845 2731 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-41?timeout=10s\": dial tcp 172.31.29.41:6443: connect: connection refused" interval="800ms" Jun 25 14:17:26.445720 kubelet[2731]: I0625 14:17:26.445660 2731 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:26.446273 kubelet[2731]: E0625 14:17:26.446233 2731 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.41:6443/api/v1/nodes\": dial tcp 172.31.29.41:6443: connect: connection refused" node="ip-172-31-29-41" Jun 25 14:17:26.764258 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount67478571.mount: Deactivated successfully. 
Jun 25 14:17:26.772852 containerd[1911]: time="2024-06-25T14:17:26.772797360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.777228 containerd[1911]: time="2024-06-25T14:17:26.777174416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jun 25 14:17:26.777900 containerd[1911]: time="2024-06-25T14:17:26.777844100Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.779547 containerd[1911]: time="2024-06-25T14:17:26.779488490Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.782600 containerd[1911]: time="2024-06-25T14:17:26.782543902Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.783802 containerd[1911]: time="2024-06-25T14:17:26.783749084Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:17:26.784274 containerd[1911]: time="2024-06-25T14:17:26.784212269Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.786630 containerd[1911]: time="2024-06-25T14:17:26.786531923Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.789860 containerd[1911]: time="2024-06-25T14:17:26.789790355Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jun 25 14:17:26.790532 containerd[1911]: time="2024-06-25T14:17:26.790479060Z" level=info msg="ImageUpdate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.792688 containerd[1911]: time="2024-06-25T14:17:26.792594518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.795528 containerd[1911]: time="2024-06-25T14:17:26.795463903Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 569.34394ms" Jun 25 14:17:26.798235 containerd[1911]: time="2024-06-25T14:17:26.798174345Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 
14:17:26.800598 containerd[1911]: time="2024-06-25T14:17:26.800518239Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.803276 containerd[1911]: time="2024-06-25T14:17:26.803224241Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.807320 containerd[1911]: time="2024-06-25T14:17:26.807253723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 579.025969ms" Jun 25 14:17:26.833219 containerd[1911]: time="2024-06-25T14:17:26.833139685Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 25 14:17:26.835385 containerd[1911]: time="2024-06-25T14:17:26.835321577Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 607.88471ms" Jun 25 14:17:26.849369 kubelet[2731]: W0625 14:17:26.849170 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://172.31.29.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:26.849369 kubelet[2731]: E0625 14:17:26.849281 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://172.31.29.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.020040 kubelet[2731]: W0625 14:17:27.018945 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://172.31.29.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.020040 kubelet[2731]: E0625 14:17:27.019032 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://172.31.29.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.064391 kubelet[2731]: W0625 14:17:27.064293 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://172.31.29.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-41&limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.064391 kubelet[2731]: E0625 14:17:27.064388 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: 
Failed to watch *v1.Node: failed to list *v1.Node: Get "https://172.31.29.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-29-41&limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.077382 kubelet[2731]: W0625 14:17:27.077245 2731 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://172.31.29.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.077382 kubelet[2731]: E0625 14:17:27.077325 2731 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://172.31.29.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 172.31.29.41:6443: connect: connection refused Jun 25 14:17:27.128389 kubelet[2731]: E0625 14:17:27.128299 2731 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.29.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-29-41?timeout=10s\": dial tcp 172.31.29.41:6443: connect: connection refused" interval="1.6s" Jun 25 14:17:27.148810 containerd[1911]: time="2024-06-25T14:17:27.148284973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:27.148810 containerd[1911]: time="2024-06-25T14:17:27.148365723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.148810 containerd[1911]: time="2024-06-25T14:17:27.148403163Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:27.148810 containerd[1911]: time="2024-06-25T14:17:27.148428592Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.158527 containerd[1911]: time="2024-06-25T14:17:27.158359196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:27.159561 containerd[1911]: time="2024-06-25T14:17:27.158464798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.159933 containerd[1911]: time="2024-06-25T14:17:27.159817029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:27.160133 containerd[1911]: time="2024-06-25T14:17:27.159881675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.168365 containerd[1911]: time="2024-06-25T14:17:27.168153154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:27.168365 containerd[1911]: time="2024-06-25T14:17:27.168282852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.168881 containerd[1911]: time="2024-06-25T14:17:27.168745892Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:27.169095 containerd[1911]: time="2024-06-25T14:17:27.169010329Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:27.253960 kubelet[2731]: I0625 14:17:27.253245 2731 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:27.253960 kubelet[2731]: E0625 14:17:27.253910 2731 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://172.31.29.41:6443/api/v1/nodes\": dial tcp 172.31.29.41:6443: connect: connection refused" node="ip-172-31-29-41" Jun 25 14:17:27.328063 containerd[1911]: time="2024-06-25T14:17:27.326900285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-29-41,Uid:73c731527d8475d2688fd0150ae8c35c,Namespace:kube-system,Attempt:0,} returns sandbox id \"7ff63f866cbdb4a7cb996f0ddd39177e55730050e02b1d4daf2706cffcb66d5a\"" Jun 25 14:17:27.335646 containerd[1911]: time="2024-06-25T14:17:27.335552879Z" level=info msg="CreateContainer within sandbox \"7ff63f866cbdb4a7cb996f0ddd39177e55730050e02b1d4daf2706cffcb66d5a\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 25 14:17:27.355590 containerd[1911]: time="2024-06-25T14:17:27.355453797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-29-41,Uid:1b5987f0d71195c3d87dcb38b8b60733,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3ceb0f749ad7ac2421fafc613bef82f84bd0a757c2afccccb8659ed949fe52b\"" Jun 25 14:17:27.356203 containerd[1911]: time="2024-06-25T14:17:27.356141433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-29-41,Uid:03efba9855c382525fd29fdf38812f1e,Namespace:kube-system,Attempt:0,} returns sandbox id \"182d758dc33ab09fc04c13d88a55275d09a3f61e1967cb20718d27f56a6fa0c7\"" Jun 25 14:17:27.363417 containerd[1911]: time="2024-06-25T14:17:27.363340801Z" level=info msg="CreateContainer within sandbox \"7ff63f866cbdb4a7cb996f0ddd39177e55730050e02b1d4daf2706cffcb66d5a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ffa4127845d9ea49047a14c1d5c33e2b8b8ea19abc4631e9e3704857fac49a8b\"" Jun 25 14:17:27.363735 containerd[1911]: time="2024-06-25T14:17:27.363685339Z" level=info msg="CreateContainer within sandbox \"182d758dc33ab09fc04c13d88a55275d09a3f61e1967cb20718d27f56a6fa0c7\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 25 14:17:27.364396 containerd[1911]: time="2024-06-25T14:17:27.364322515Z" level=info msg="CreateContainer within sandbox \"b3ceb0f749ad7ac2421fafc613bef82f84bd0a757c2afccccb8659ed949fe52b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 25 14:17:27.367290 containerd[1911]: time="2024-06-25T14:17:27.367228449Z" level=info msg="StartContainer for \"ffa4127845d9ea49047a14c1d5c33e2b8b8ea19abc4631e9e3704857fac49a8b\"" Jun 25 14:17:27.394937 containerd[1911]: time="2024-06-25T14:17:27.394837472Z" level=info msg="CreateContainer within sandbox \"182d758dc33ab09fc04c13d88a55275d09a3f61e1967cb20718d27f56a6fa0c7\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1fb38883ca70e2a6ffc2908ec989fabe3bd1567fea31bb3f97fc7f7a70bbabdd\"" Jun 25 14:17:27.395566 containerd[1911]: time="2024-06-25T14:17:27.395520296Z" level=info msg="StartContainer for \"1fb38883ca70e2a6ffc2908ec989fabe3bd1567fea31bb3f97fc7f7a70bbabdd\"" Jun 25 14:17:27.396455 containerd[1911]: 
time="2024-06-25T14:17:27.396378935Z" level=info msg="CreateContainer within sandbox \"b3ceb0f749ad7ac2421fafc613bef82f84bd0a757c2afccccb8659ed949fe52b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5cf92579ad3d71d493a154645f6357668af6a54c8c32e5c54affc748a7702788\"" Jun 25 14:17:27.397285 containerd[1911]: time="2024-06-25T14:17:27.397228658Z" level=info msg="StartContainer for \"5cf92579ad3d71d493a154645f6357668af6a54c8c32e5c54affc748a7702788\"" Jun 25 14:17:27.526904 containerd[1911]: time="2024-06-25T14:17:27.526785594Z" level=info msg="StartContainer for \"ffa4127845d9ea49047a14c1d5c33e2b8b8ea19abc4631e9e3704857fac49a8b\" returns successfully" Jun 25 14:17:27.652144 containerd[1911]: time="2024-06-25T14:17:27.651979495Z" level=info msg="StartContainer for \"5cf92579ad3d71d493a154645f6357668af6a54c8c32e5c54affc748a7702788\" returns successfully" Jun 25 14:17:27.671946 containerd[1911]: time="2024-06-25T14:17:27.671884084Z" level=info msg="StartContainer for \"1fb38883ca70e2a6ffc2908ec989fabe3bd1567fea31bb3f97fc7f7a70bbabdd\" returns successfully" Jun 25 14:17:28.862161 kubelet[2731]: I0625 14:17:28.862096 2731 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:31.628181 kubelet[2731]: I0625 14:17:31.628118 2731 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-41" Jun 25 14:17:31.628800 kubelet[2731]: E0625 14:17:31.628498 2731 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-29-41\" not found" node="ip-172-31-29-41" Jun 25 14:17:31.702682 kubelet[2731]: I0625 14:17:31.702539 2731 apiserver.go:52] "Watching apiserver" Jun 25 14:17:31.725043 kubelet[2731]: I0625 14:17:31.724945 2731 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:17:31.792676 kubelet[2731]: E0625 14:17:31.792306 2731 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ip-172-31-29-41.17dc45050abf324e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ip-172-31-29-41", UID:"ip-172-31-29-41", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ip-172-31-29-41"}, FirstTimestamp:time.Date(2024, time.June, 25, 14, 17, 25, 706142286, time.Local), LastTimestamp:time.Date(2024, time.June, 25, 14, 17, 25, 706142286, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ip-172-31-29-41"}': 'namespaces "default" not found' (will not retry!) Jun 25 14:17:32.627858 update_engine[1899]: I0625 14:17:32.627725 1899 update_attempter.cc:509] Updating boot flags... 
Jun 25 14:17:32.853654 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3024) Jun 25 14:17:33.487136 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 33 scanned by (udev-worker) (3023) Jun 25 14:17:35.120272 systemd[1]: Reloading. Jun 25 14:17:35.522860 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 25 14:17:35.757551 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:35.762744 systemd[1]: kubelet.service: Deactivated successfully. Jun 25 14:17:35.763454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:35.768789 kernel: kauditd_printk_skb: 29 callbacks suppressed Jun 25 14:17:35.768881 kernel: audit: type=1131 audit(1719325055.762:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:35.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:35.773085 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 25 14:17:36.193957 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jun 25 14:17:36.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:36.207668 kernel: audit: type=1130 audit(1719325056.196:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:36.371142 kubelet[3283]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 25 14:17:36.371811 kubelet[3283]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 25 14:17:36.371942 kubelet[3283]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jun 25 14:17:36.372183 kubelet[3283]: I0625 14:17:36.372124 3283 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 25 14:17:36.380592 kubelet[3283]: I0625 14:17:36.380528 3283 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Jun 25 14:17:36.380592 kubelet[3283]: I0625 14:17:36.380580 3283 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 25 14:17:36.381012 kubelet[3283]: I0625 14:17:36.380971 3283 server.go:895] "Client rotation is on, will bootstrap in background" Jun 25 14:17:36.385813 kubelet[3283]: I0625 14:17:36.384497 3283 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jun 25 14:17:36.389675 kubelet[3283]: I0625 14:17:36.388825 3283 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 25 14:17:36.411502 kubelet[3283]: W0625 14:17:36.410065 3283 machine.go:65] Cannot read vendor id correctly, set empty. Jun 25 14:17:36.427182 kubelet[3283]: I0625 14:17:36.427128 3283 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 25 14:17:36.431101 kubelet[3283]: I0625 14:17:36.431048 3283 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 25 14:17:36.431743 kubelet[3283]: I0625 14:17:36.431690 3283 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jun 25 14:17:36.432019 kubelet[3283]: I0625 14:17:36.431995 3283 topology_manager.go:138] "Creating topology manager with none policy" Jun 25 14:17:36.432159 kubelet[3283]: I0625 14:17:36.432139 3283 container_manager_linux.go:301] "Creating device plugin manager" Jun 25 14:17:36.432354 kubelet[3283]: I0625 14:17:36.432313 3283 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:17:36.432689 kubelet[3283]: I0625 14:17:36.432668 3283 kubelet.go:393] "Attempting to sync node with API server" Jun 25 14:17:36.432862 kubelet[3283]: I0625 14:17:36.432840 3283 kubelet.go:298] "Adding static pod path" 
path="/etc/kubernetes/manifests" Jun 25 14:17:36.433038 kubelet[3283]: I0625 14:17:36.433006 3283 kubelet.go:309] "Adding apiserver pod source" Jun 25 14:17:36.433176 kubelet[3283]: I0625 14:17:36.433156 3283 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 25 14:17:36.435472 kubelet[3283]: I0625 14:17:36.435415 3283 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jun 25 14:17:36.437194 kubelet[3283]: I0625 14:17:36.437146 3283 server.go:1232] "Started kubelet" Jun 25 14:17:36.442775 kubelet[3283]: I0625 14:17:36.442737 3283 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 25 14:17:36.445503 kubelet[3283]: I0625 14:17:36.444137 3283 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jun 25 14:17:36.454518 kubelet[3283]: I0625 14:17:36.454483 3283 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Jun 25 14:17:36.456220 kubelet[3283]: I0625 14:17:36.456179 3283 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 25 14:17:36.463238 kubelet[3283]: I0625 14:17:36.463200 3283 volume_manager.go:291] "Starting Kubelet Volume Manager" Jun 25 14:17:36.466937 kubelet[3283]: E0625 14:17:36.466897 3283 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Jun 25 14:17:36.467769 kubelet[3283]: E0625 14:17:36.467696 3283 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 25 14:17:36.468876 kubelet[3283]: I0625 14:17:36.468829 3283 reconciler_new.go:29] "Reconciler: start to sync state" Jun 25 14:17:36.492790 kubelet[3283]: I0625 14:17:36.467253 3283 server.go:462] "Adding debug handlers to kubelet server" Jun 25 14:17:36.505859 kubelet[3283]: I0625 14:17:36.467296 3283 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jun 25 14:17:36.539233 kubelet[3283]: I0625 14:17:36.538418 3283 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 25 14:17:36.542708 kubelet[3283]: I0625 14:17:36.541000 3283 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jun 25 14:17:36.542708 kubelet[3283]: I0625 14:17:36.541065 3283 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 25 14:17:36.542708 kubelet[3283]: I0625 14:17:36.541119 3283 kubelet.go:2303] "Starting kubelet main sync loop" Jun 25 14:17:36.542708 kubelet[3283]: E0625 14:17:36.541242 3283 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 25 14:17:36.592028 kubelet[3283]: E0625 14:17:36.590983 3283 container_manager_linux.go:881] "Unable to get rootfs data from cAdvisor interface" err="unable to find data in memory cache" Jun 25 14:17:36.613953 kubelet[3283]: I0625 14:17:36.598685 3283 kubelet_node_status.go:70] "Attempting to register node" node="ip-172-31-29-41" Jun 25 14:17:36.625190 kubelet[3283]: I0625 14:17:36.624881 3283 kubelet_node_status.go:108] "Node was previously registered" node="ip-172-31-29-41" Jun 25 14:17:36.625190 kubelet[3283]: I0625 14:17:36.625014 3283 kubelet_node_status.go:73] "Successfully registered node" node="ip-172-31-29-41" Jun 25 14:17:36.649129 kubelet[3283]: E0625 14:17:36.649072 3283 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 25 14:17:36.804661 kubelet[3283]: I0625 14:17:36.804373 3283 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 25 14:17:36.804661 kubelet[3283]: I0625 14:17:36.804462 3283 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 25 14:17:36.805315 kubelet[3283]: I0625 14:17:36.804542 3283 state_mem.go:36] "Initialized new in-memory state store" Jun 25 14:17:36.805706 kubelet[3283]: I0625 14:17:36.805664 3283 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 25 14:17:36.805793 kubelet[3283]: I0625 14:17:36.805724 3283 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 25 14:17:36.805793 kubelet[3283]: I0625 14:17:36.805744 3283 policy_none.go:49] "None policy: Start" Jun 25 14:17:36.807557 kubelet[3283]: I0625 14:17:36.807381 3283 memory_manager.go:169] "Starting memorymanager" policy="None" Jun 25 14:17:36.807557 kubelet[3283]: I0625 14:17:36.807442 3283 state_mem.go:35] "Initializing new in-memory state store" Jun 25 14:17:36.807945 kubelet[3283]: I0625 14:17:36.807903 3283 state_mem.go:75] "Updated machine memory state" Jun 25 14:17:36.810477 kubelet[3283]: I0625 14:17:36.810420 3283 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 25 14:17:36.818085 kubelet[3283]: I0625 14:17:36.816008 3283 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 25 14:17:36.849256 kubelet[3283]: I0625 14:17:36.849221 3283 topology_manager.go:215] "Topology Admit Handler" podUID="73c731527d8475d2688fd0150ae8c35c" podNamespace="kube-system" podName="kube-apiserver-ip-172-31-29-41" Jun 25 14:17:36.850812 kubelet[3283]: I0625 14:17:36.850776 3283 topology_manager.go:215] "Topology Admit Handler" podUID="1b5987f0d71195c3d87dcb38b8b60733" podNamespace="kube-system" podName="kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:36.860898 kubelet[3283]: I0625 14:17:36.851711 3283 topology_manager.go:215] "Topology Admit Handler" podUID="03efba9855c382525fd29fdf38812f1e" podNamespace="kube-system" podName="kube-scheduler-ip-172-31-29-41" Jun 25 14:17:36.988779 kubelet[3283]: I0625 14:17:36.988678 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-k8s-certs\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:36.989085 kubelet[3283]: I0625 14:17:36.989050 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:36.989265 kubelet[3283]: I0625 14:17:36.989246 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-ca-certs\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:36.989478 kubelet[3283]: I0625 14:17:36.989444 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:36.989737 kubelet[3283]: I0625 14:17:36.989704 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:36.989972 kubelet[3283]: I0625 14:17:36.989937 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/03efba9855c382525fd29fdf38812f1e-kubeconfig\") pod \"kube-scheduler-ip-172-31-29-41\" (UID: \"03efba9855c382525fd29fdf38812f1e\") " pod="kube-system/kube-scheduler-ip-172-31-29-41" Jun 25 14:17:36.990192 kubelet[3283]: I0625 14:17:36.990172 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/73c731527d8475d2688fd0150ae8c35c-ca-certs\") pod \"kube-apiserver-ip-172-31-29-41\" (UID: \"73c731527d8475d2688fd0150ae8c35c\") " pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:36.990380 kubelet[3283]: I0625 14:17:36.990361 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-k8s-certs\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:36.990579 kubelet[3283]: I0625 14:17:36.990559 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b5987f0d71195c3d87dcb38b8b60733-kubeconfig\") pod \"kube-controller-manager-ip-172-31-29-41\" (UID: \"1b5987f0d71195c3d87dcb38b8b60733\") " pod="kube-system/kube-controller-manager-ip-172-31-29-41" Jun 25 14:17:37.434791 
kubelet[3283]: I0625 14:17:37.434691 3283 apiserver.go:52] "Watching apiserver" Jun 25 14:17:37.508482 kubelet[3283]: I0625 14:17:37.508433 3283 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jun 25 14:17:37.672938 kubelet[3283]: E0625 14:17:37.672897 3283 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ip-172-31-29-41\" already exists" pod="kube-system/kube-apiserver-ip-172-31-29-41" Jun 25 14:17:37.731861 kubelet[3283]: I0625 14:17:37.731577 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-29-41" podStartSLOduration=1.7313872259999998 podCreationTimestamp="2024-06-25 14:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:37.707052747 +0000 UTC m=+1.478140199" watchObservedRunningTime="2024-06-25 14:17:37.731387226 +0000 UTC m=+1.502474678" Jun 25 14:17:37.771762 kubelet[3283]: I0625 14:17:37.771706 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-29-41" podStartSLOduration=1.7716526510000001 podCreationTimestamp="2024-06-25 14:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:37.732166142 +0000 UTC m=+1.503253918" watchObservedRunningTime="2024-06-25 14:17:37.771652651 +0000 UTC m=+1.542740103" Jun 25 14:17:37.812978 kubelet[3283]: I0625 14:17:37.812923 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-29-41" podStartSLOduration=1.8128404900000001 podCreationTimestamp="2024-06-25 14:17:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:37.772978113 +0000 UTC m=+1.544065577" watchObservedRunningTime="2024-06-25 14:17:37.81284049 +0000 UTC m=+1.583927942" Jun 25 14:17:43.919731 sudo[2242]: pam_unix(sudo:session): session closed for user root Jun 25 14:17:43.919000 audit[2242]: USER_END pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:17:43.924730 kernel: audit: type=1106 audit(1719325063.919:213): pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:17:43.924866 kernel: audit: type=1104 audit(1719325063.919:214): pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jun 25 14:17:43.919000 audit[2242]: CRED_DISP pid=2242 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jun 25 14:17:43.948514 sshd[2238]: pam_unix(sshd:session): session closed for user core Jun 25 14:17:43.950000 audit[2238]: USER_END pid=2238 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:43.954272 systemd[1]: sshd@6-172.31.29.41:22-139.178.68.195:47678.service: Deactivated successfully. Jun 25 14:17:43.955886 systemd[1]: session-7.scope: Deactivated successfully. Jun 25 14:17:43.950000 audit[2238]: CRED_DISP pid=2238 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:43.963511 kernel: audit: type=1106 audit(1719325063.950:215): pid=2238 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:43.963691 kernel: audit: type=1104 audit(1719325063.950:216): pid=2238 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:17:43.959059 systemd-logind[1895]: Session 7 logged out. Waiting for processes to exit. Jun 25 14:17:43.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.41:22-139.178.68.195:47678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:43.969354 kernel: audit: type=1131 audit(1719325063.950:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-172.31.29.41:22-139.178.68.195:47678 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:17:43.969736 systemd-logind[1895]: Removed session 7. Jun 25 14:17:49.925385 kubelet[3283]: I0625 14:17:49.925318 3283 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 25 14:17:49.926549 containerd[1911]: time="2024-06-25T14:17:49.926453827Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
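The audit records that follow log every iptables/ip6tables call apparently issued as the newly started kube-proxy programs its chains (KUBE-PROXY-CANARY, KUBE-SERVICES, KUBE-FORWARD, and the rest); each PROCTITLE field is the full command line, hex-encoded with NUL bytes separating the arguments. As an illustrative aside that is not part of the original log, a minimal Python sketch for decoding those fields; the sample string is copied from the first NETFILTER_CFG record below:

    # Decode an audit PROCTITLE hex field back into the command line it records.
    import binascii

    def decode_proctitle(hex_field: str) -> str:
        # PROCTITLE stores argv hex-encoded, with NUL bytes between arguments.
        raw = binascii.unhexlify(hex_field)
        return raw.decode("utf-8", errors="replace").replace("\x00", " ")

    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65")
    print(decode_proctitle(sample))
    # prints: iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle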
Jun 25 14:17:49.927244 kubelet[3283]: I0625 14:17:49.927052 3283 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 25 14:17:50.201746 kubelet[3283]: I0625 14:17:50.201566 3283 topology_manager.go:215] "Topology Admit Handler" podUID="819f4bba-3f67-4cfd-9c5c-b7f6dba2049b" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-nvw6w" Jun 25 14:17:50.218385 kubelet[3283]: I0625 14:17:50.218325 3283 topology_manager.go:215] "Topology Admit Handler" podUID="1a619ddb-290b-4f1f-ba40-60b30039eff1" podNamespace="kube-system" podName="kube-proxy-8w5pk" Jun 25 14:17:50.371003 kubelet[3283]: I0625 14:17:50.370924 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtqg7\" (UniqueName: \"kubernetes.io/projected/819f4bba-3f67-4cfd-9c5c-b7f6dba2049b-kube-api-access-dtqg7\") pod \"tigera-operator-76c4974c85-nvw6w\" (UID: \"819f4bba-3f67-4cfd-9c5c-b7f6dba2049b\") " pod="tigera-operator/tigera-operator-76c4974c85-nvw6w" Jun 25 14:17:50.371325 kubelet[3283]: I0625 14:17:50.371021 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a619ddb-290b-4f1f-ba40-60b30039eff1-lib-modules\") pod \"kube-proxy-8w5pk\" (UID: \"1a619ddb-290b-4f1f-ba40-60b30039eff1\") " pod="kube-system/kube-proxy-8w5pk" Jun 25 14:17:50.371325 kubelet[3283]: I0625 14:17:50.371092 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nmcpp\" (UniqueName: \"kubernetes.io/projected/1a619ddb-290b-4f1f-ba40-60b30039eff1-kube-api-access-nmcpp\") pod \"kube-proxy-8w5pk\" (UID: \"1a619ddb-290b-4f1f-ba40-60b30039eff1\") " pod="kube-system/kube-proxy-8w5pk" Jun 25 14:17:50.371325 kubelet[3283]: I0625 14:17:50.371158 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/819f4bba-3f67-4cfd-9c5c-b7f6dba2049b-var-lib-calico\") pod \"tigera-operator-76c4974c85-nvw6w\" (UID: \"819f4bba-3f67-4cfd-9c5c-b7f6dba2049b\") " pod="tigera-operator/tigera-operator-76c4974c85-nvw6w" Jun 25 14:17:50.371325 kubelet[3283]: I0625 14:17:50.371223 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a619ddb-290b-4f1f-ba40-60b30039eff1-kube-proxy\") pod \"kube-proxy-8w5pk\" (UID: \"1a619ddb-290b-4f1f-ba40-60b30039eff1\") " pod="kube-system/kube-proxy-8w5pk" Jun 25 14:17:50.371325 kubelet[3283]: I0625 14:17:50.371271 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a619ddb-290b-4f1f-ba40-60b30039eff1-xtables-lock\") pod \"kube-proxy-8w5pk\" (UID: \"1a619ddb-290b-4f1f-ba40-60b30039eff1\") " pod="kube-system/kube-proxy-8w5pk" Jun 25 14:17:50.553432 containerd[1911]: time="2024-06-25T14:17:50.553228876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-nvw6w,Uid:819f4bba-3f67-4cfd-9c5c-b7f6dba2049b,Namespace:tigera-operator,Attempt:0,}" Jun 25 14:17:50.555906 containerd[1911]: time="2024-06-25T14:17:50.555119067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8w5pk,Uid:1a619ddb-290b-4f1f-ba40-60b30039eff1,Namespace:kube-system,Attempt:0,}" Jun 25 14:17:50.636976 containerd[1911]: time="2024-06-25T14:17:50.636490199Z" 
level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:50.636976 containerd[1911]: time="2024-06-25T14:17:50.636591383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:50.636976 containerd[1911]: time="2024-06-25T14:17:50.636648936Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:50.636976 containerd[1911]: time="2024-06-25T14:17:50.636679500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:50.653464 containerd[1911]: time="2024-06-25T14:17:50.653244474Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:17:50.653907 containerd[1911]: time="2024-06-25T14:17:50.653392974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:50.654352 containerd[1911]: time="2024-06-25T14:17:50.654134099Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:17:50.654352 containerd[1911]: time="2024-06-25T14:17:50.654236940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:17:50.777130 containerd[1911]: time="2024-06-25T14:17:50.777047206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8w5pk,Uid:1a619ddb-290b-4f1f-ba40-60b30039eff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"66d1c733dfea8f4627cb431eff4f90d030e2b54037c71c608bd48b01264eb7d9\"" Jun 25 14:17:50.790094 containerd[1911]: time="2024-06-25T14:17:50.790023797Z" level=info msg="CreateContainer within sandbox \"66d1c733dfea8f4627cb431eff4f90d030e2b54037c71c608bd48b01264eb7d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 25 14:17:50.825779 containerd[1911]: time="2024-06-25T14:17:50.825287790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-nvw6w,Uid:819f4bba-3f67-4cfd-9c5c-b7f6dba2049b,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"82f57fcd297eeaa98de49aecbe0e29edeccd45465fdc4d7496f4d5454c24758f\"" Jun 25 14:17:50.833692 containerd[1911]: time="2024-06-25T14:17:50.833589717Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:17:50.834091 containerd[1911]: time="2024-06-25T14:17:50.834026448Z" level=info msg="CreateContainer within sandbox \"66d1c733dfea8f4627cb431eff4f90d030e2b54037c71c608bd48b01264eb7d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"83a09b66638793325e71ec4e24347a5540632154d5ab48792233f023222db20b\"" Jun 25 14:17:50.836015 containerd[1911]: time="2024-06-25T14:17:50.835927823Z" level=info msg="StartContainer for \"83a09b66638793325e71ec4e24347a5540632154d5ab48792233f023222db20b\"" Jun 25 14:17:50.963655 containerd[1911]: time="2024-06-25T14:17:50.957861820Z" level=info msg="StartContainer for \"83a09b66638793325e71ec4e24347a5540632154d5ab48792233f023222db20b\" returns successfully" Jun 25 14:17:51.093000 audit[3502]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 
14:17:51.093000 audit[3502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd610ff30 a2=0 a3=1 items=0 ppid=3461 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.097715 kernel: audit: type=1325 audit(1719325071.093:218): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=3502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.093000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:51.106555 kernel: audit: type=1300 audit(1719325071.093:218): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd610ff30 a2=0 a3=1 items=0 ppid=3461 pid=3502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.106726 kernel: audit: type=1327 audit(1719325071.093:218): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:51.108000 audit[3503]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.108000 audit[3503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc073a830 a2=0 a3=1 items=0 ppid=3461 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.117280 kernel: audit: type=1325 audit(1719325071.108:219): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=3503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.117529 kernel: audit: type=1300 audit(1719325071.108:219): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc073a830 a2=0 a3=1 items=0 ppid=3461 pid=3503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.108000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:51.120777 kernel: audit: type=1327 audit(1719325071.108:219): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jun 25 14:17:51.111000 audit[3504]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=3504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.124042 kernel: audit: type=1325 audit(1719325071.111:220): table=nat:40 family=10 entries=1 op=nft_register_chain pid=3504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.111000 audit[3504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1a8f180 a2=0 a3=1 items=0 ppid=3461 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.134913 kernel: audit: type=1300 audit(1719325071.111:220): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1a8f180 a2=0 a3=1 
items=0 ppid=3461 pid=3504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.111000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:17:51.139314 kernel: audit: type=1327 audit(1719325071.111:220): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:17:51.139506 kernel: audit: type=1325 audit(1719325071.136:221): table=filter:41 family=10 entries=1 op=nft_register_chain pid=3506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.136000 audit[3506]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=3506 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.136000 audit[3506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd13583b0 a2=0 a3=1 items=0 ppid=3461 pid=3506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.136000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:17:51.139000 audit[3505]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=3505 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.139000 audit[3505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe6456050 a2=0 a3=1 items=0 ppid=3461 pid=3505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.139000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jun 25 14:17:51.147000 audit[3507]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=3507 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.147000 audit[3507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd2d6dc40 a2=0 a3=1 items=0 ppid=3461 pid=3507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.147000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jun 25 14:17:51.228000 audit[3508]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=3508 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.228000 audit[3508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffe34ff600 a2=0 a3=1 items=0 ppid=3461 pid=3508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.228000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:17:51.235000 audit[3510]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule 
pid=3510 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.235000 audit[3510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffccdc0590 a2=0 a3=1 items=0 ppid=3461 pid=3510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.235000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jun 25 14:17:51.245000 audit[3513]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=3513 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.245000 audit[3513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffff5a7640 a2=0 a3=1 items=0 ppid=3461 pid=3513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jun 25 14:17:51.248000 audit[3514]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=3514 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.248000 audit[3514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc0e46100 a2=0 a3=1 items=0 ppid=3461 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:17:51.255000 audit[3516]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=3516 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.255000 audit[3516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6a25010 a2=0 a3=1 items=0 ppid=3461 pid=3516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.255000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:17:51.258000 audit[3517]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=3517 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.258000 audit[3517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb221b60 a2=0 a3=1 items=0 ppid=3461 pid=3517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.258000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:17:51.267000 audit[3519]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=3519 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.267000 audit[3519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffeacef860 a2=0 a3=1 items=0 ppid=3461 pid=3519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:17:51.279000 audit[3522]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=3522 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.279000 audit[3522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdc54f4d0 a2=0 a3=1 items=0 ppid=3461 pid=3522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.279000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jun 25 14:17:51.282000 audit[3523]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=3523 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.282000 audit[3523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe50757f0 a2=0 a3=1 items=0 ppid=3461 pid=3523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.282000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:17:51.290000 audit[3525]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=3525 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.290000 audit[3525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6bd19e0 a2=0 a3=1 items=0 ppid=3461 pid=3525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.290000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:17:51.294000 audit[3526]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=3526 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.294000 audit[3526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc7ea0130 a2=0 a3=1 items=0 ppid=3461 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.294000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:17:51.302000 audit[3528]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=3528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.302000 audit[3528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd17b93c0 a2=0 a3=1 items=0 ppid=3461 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.302000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:51.315000 audit[3531]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=3531 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.315000 audit[3531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff4c156c0 a2=0 a3=1 items=0 ppid=3461 pid=3531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:51.326000 audit[3534]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=3534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.326000 audit[3534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffea3a9a00 a2=0 a3=1 items=0 ppid=3461 pid=3534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:17:51.328000 audit[3535]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=3535 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.328000 audit[3535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd95d84e0 a2=0 a3=1 items=0 ppid=3461 pid=3535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:17:51.335000 audit[3537]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=3537 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jun 25 14:17:51.335000 audit[3537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd0b35020 a2=0 a3=1 items=0 ppid=3461 pid=3537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:51.345000 audit[3540]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=3540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.345000 audit[3540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffaa44380 a2=0 a3=1 items=0 ppid=3461 pid=3540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:51.348000 audit[3541]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.348000 audit[3541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefa4e270 a2=0 a3=1 items=0 ppid=3461 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:17:51.358000 audit[3543]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=3543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jun 25 14:17:51.358000 audit[3543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffcd014cd0 a2=0 a3=1 items=0 ppid=3461 pid=3543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:17:51.397000 audit[3549]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:51.397000 audit[3549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffff0769df0 a2=0 a3=1 items=0 ppid=3461 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.397000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:51.412000 audit[3549]: 
NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=3549 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:17:51.412000 audit[3549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffff0769df0 a2=0 a3=1 items=0 ppid=3461 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.412000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:51.417000 audit[3555]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=3555 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.417000 audit[3555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc28ae960 a2=0 a3=1 items=0 ppid=3461 pid=3555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.417000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jun 25 14:17:51.423000 audit[3557]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=3557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.423000 audit[3557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=fffff31c9b10 a2=0 a3=1 items=0 ppid=3461 pid=3557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.423000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jun 25 14:17:51.432000 audit[3560]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=3560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.432000 audit[3560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd26d4eb0 a2=0 a3=1 items=0 ppid=3461 pid=3560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.432000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jun 25 14:17:51.436000 audit[3561]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=3561 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.436000 audit[3561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4da0df0 a2=0 a3=1 items=0 ppid=3461 pid=3561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.436000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jun 25 14:17:51.441000 audit[3563]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=3563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.441000 audit[3563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffa31b10 a2=0 a3=1 items=0 ppid=3461 pid=3563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.441000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jun 25 14:17:51.444000 audit[3564]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=3564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.444000 audit[3564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffae534b0 a2=0 a3=1 items=0 ppid=3461 pid=3564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.444000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jun 25 14:17:51.451000 audit[3566]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=3566 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.451000 audit[3566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffde306880 a2=0 a3=1 items=0 ppid=3461 pid=3566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.451000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jun 25 14:17:51.462000 audit[3569]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=3569 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.462000 audit[3569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffcf0aaa40 a2=0 a3=1 items=0 ppid=3461 pid=3569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jun 25 14:17:51.464000 audit[3570]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=3570 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.464000 audit[3570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5dd3f10 a2=0 a3=1 items=0 ppid=3461 pid=3570 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.464000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jun 25 14:17:51.474000 audit[3572]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=3572 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.474000 audit[3572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe9207850 a2=0 a3=1 items=0 ppid=3461 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.474000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jun 25 14:17:51.477000 audit[3573]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=3573 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.477000 audit[3573]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff9844970 a2=0 a3=1 items=0 ppid=3461 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jun 25 14:17:51.485000 audit[3575]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=3575 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.485000 audit[3575]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6a4ca00 a2=0 a3=1 items=0 ppid=3461 pid=3575 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.485000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jun 25 14:17:51.503000 audit[3578]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=3578 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.503000 audit[3578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe8c480a0 a2=0 a3=1 items=0 ppid=3461 pid=3578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.503000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jun 25 14:17:51.515000 audit[3581]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule 
pid=3581 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.515000 audit[3581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffc5bec60 a2=0 a3=1 items=0 ppid=3461 pid=3581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.515000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jun 25 14:17:51.519000 audit[3582]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=3582 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.519000 audit[3582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffec3d4d30 a2=0 a3=1 items=0 ppid=3461 pid=3582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.519000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jun 25 14:17:51.527000 audit[3584]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=3584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.527000 audit[3584]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffcc7b9360 a2=0 a3=1 items=0 ppid=3461 pid=3584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.527000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:51.535000 audit[3587]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=3587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.535000 audit[3587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdf931060 a2=0 a3=1 items=0 ppid=3461 pid=3587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.535000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jun 25 14:17:51.540000 audit[3588]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=3588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.540000 audit[3588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc50c8ce0 a2=0 a3=1 items=0 ppid=3461 pid=3588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.540000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jun 25 14:17:51.547000 audit[3590]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=3590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.547000 audit[3590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd89c6950 a2=0 a3=1 items=0 ppid=3461 pid=3590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.547000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jun 25 14:17:51.550000 audit[3591]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3591 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.550000 audit[3591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea8827a0 a2=0 a3=1 items=0 ppid=3461 pid=3591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.550000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jun 25 14:17:51.556000 audit[3593]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3593 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.556000 audit[3593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffdf433480 a2=0 a3=1 items=0 ppid=3461 pid=3593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:51.567000 audit[3596]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=3596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jun 25 14:17:51.567000 audit[3596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe935b100 a2=0 a3=1 items=0 ppid=3461 pid=3596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.567000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jun 25 14:17:51.578000 audit[3598]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=3598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:17:51.578000 audit[3598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffc1fd1570 a2=0 a3=1 items=0 ppid=3461 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.578000 audit: PROCTITLE 
proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:51.580000 audit[3598]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=3598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jun 25 14:17:51.580000 audit[3598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffc1fd1570 a2=0 a3=1 items=0 ppid=3461 pid=3598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:17:51.580000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:17:51.718155 kubelet[3283]: I0625 14:17:51.717745 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-8w5pk" podStartSLOduration=1.7176601009999999 podCreationTimestamp="2024-06-25 14:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:17:51.717272823 +0000 UTC m=+15.488360287" watchObservedRunningTime="2024-06-25 14:17:51.717660101 +0000 UTC m=+15.488747577" Jun 25 14:17:51.993864 containerd[1911]: time="2024-06-25T14:17:51.993434674Z" level=error msg="PullImage \"quay.io/tigera/operator:v1.34.0\" failed" error="failed to pull and unpack image \"quay.io/tigera/operator:v1.34.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/58/5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240625%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240625T141751Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0a024d912118bff32336ff960d0b19330c908446e112966aef1784f988775efd&cf_sign=gwR0fx5YnIN2bgkBjnpoKbjtVm32eZUbbDb72d2ohcrrEuCBJhgQ5nxpckOM17%2Bq841fDkkMgaOHEL%2BM3CsZZ9iAZ2FqiAB6MOcELnguqG7aw5ue74x%2BcAwIm1bIppcnGUMoaJ1wKMyUgeHSKDpoo5Ja7F0pBY%2BOIrfAl2EBqmU0jspxW3vgVRQKq4ZJSO51z75Y%2FAQmZN5wDepQ4pSFysZsiM7BKrapTE033mnpvDgSJagWr5U1psuPTCpysqZWmo3zp04oF%2F5%2FTLn51Uf0y4kOR6xR4ld4QdcL%2F50YAwB1RO6ndi%2FPiQOibsiVu9BqUD6alxhZVcvVVsKCWcN8%2Bg%3D%3D&cf_expiry=1719325671&region=us-east-1&namespace=tigera&repo_name=operator\": dial tcp: lookup cdn03.quay.io: no such host" Jun 25 14:17:51.994808 containerd[1911]: time="2024-06-25T14:17:51.993519862Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=4034" Jun 25 14:17:51.995311 kubelet[3283]: E0625 14:17:51.995275 3283 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"quay.io/tigera/operator:v1.34.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\"https://cdn03.quay.io/quayio-production-s3/sha256/58/5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240625%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240625T141751Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0a024d912118bff32336ff960d0b19330c908446e112966aef1784f988775efd&cf_sign=gwR0fx5YnIN2bgkBjnpoKbjtVm32eZUbbDb72d2ohcrrEuCBJhgQ5nxpckOM17%2Bq841fDkkMgaOHEL%2BM3CsZZ9iAZ2FqiAB6MOcELnguqG7aw5ue74x%2BcAwIm1bIppcnGUMoaJ1wKMyUgeHSKDpoo5Ja7F0pBY%2BOIrfAl2EBqmU0jspxW3vgVRQKq4ZJSO51z75Y%2FAQmZN5wDepQ4pSFysZsiM7BKrapTE033mnpvDgSJagWr5U1psuPTCpysqZWmo3zp04oF%2F5%2FTLn51Uf0y4kOR6xR4ld4QdcL%2F50YAwB1RO6ndi%2FPiQOibsiVu9BqUD6alxhZVcvVVsKCWcN8%2Bg%3D%3D&cf_expiry=1719325671&region=us-east-1&namespace=tigera&repo_name=operator\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/tigera/operator:v1.34.0" Jun 25 14:17:51.995684 kubelet[3283]: E0625 14:17:51.995591 3283 kuberuntime_image.go:53] "Failed to pull image" err="failed to pull and unpack image \"quay.io/tigera/operator:v1.34.0\": failed to copy: httpReadSeeker: failed open: failed to do request: Get \"https://cdn03.quay.io/quayio-production-s3/sha256/58/5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240625%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240625T141751Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0a024d912118bff32336ff960d0b19330c908446e112966aef1784f988775efd&cf_sign=gwR0fx5YnIN2bgkBjnpoKbjtVm32eZUbbDb72d2ohcrrEuCBJhgQ5nxpckOM17%2Bq841fDkkMgaOHEL%2BM3CsZZ9iAZ2FqiAB6MOcELnguqG7aw5ue74x%2BcAwIm1bIppcnGUMoaJ1wKMyUgeHSKDpoo5Ja7F0pBY%2BOIrfAl2EBqmU0jspxW3vgVRQKq4ZJSO51z75Y%2FAQmZN5wDepQ4pSFysZsiM7BKrapTE033mnpvDgSJagWr5U1psuPTCpysqZWmo3zp04oF%2F5%2FTLn51Uf0y4kOR6xR4ld4QdcL%2F50YAwB1RO6ndi%2FPiQOibsiVu9BqUD6alxhZVcvVVsKCWcN8%2Bg%3D%3D&cf_expiry=1719325671&region=us-east-1&namespace=tigera&repo_name=operator\": dial tcp: lookup cdn03.quay.io: no such host" image="quay.io/tigera/operator:v1.34.0" Jun 25 14:17:51.996095 kubelet[3283]: E0625 14:17:51.996052 3283 kuberuntime_manager.go:1261] container 
&Container{Name:tigera-operator,Image:quay.io/tigera/operator:v1.34.0,Command:[operator],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:nil,},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:tigera-operator,ValueFrom:nil,},EnvVar{Name:TIGERA_OPERATOR_INIT_IMAGE_VERSION,Value:v1.34.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:var-lib-calico,ReadOnly:true,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-dtqg7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:kubernetes-services-endpoint,},Optional:*true,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod tigera-operator-76c4974c85-nvw6w_tigera-operator(819f4bba-3f67-4cfd-9c5c-b7f6dba2049b): ErrImagePull: failed to pull and unpack image "quay.io/tigera/operator:v1.34.0": failed to copy: httpReadSeeker: failed open: failed to do request: Get "https://cdn03.quay.io/quayio-production-s3/sha256/58/5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240625%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240625T141751Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0a024d912118bff32336ff960d0b19330c908446e112966aef1784f988775efd&cf_sign=gwR0fx5YnIN2bgkBjnpoKbjtVm32eZUbbDb72d2ohcrrEuCBJhgQ5nxpckOM17%2Bq841fDkkMgaOHEL%2BM3CsZZ9iAZ2FqiAB6MOcELnguqG7aw5ue74x%2BcAwIm1bIppcnGUMoaJ1wKMyUgeHSKDpoo5Ja7F0pBY%2BOIrfAl2EBqmU0jspxW3vgVRQKq4ZJSO51z75Y%2FAQmZN5wDepQ4pSFysZsiM7BKrapTE033mnpvDgSJagWr5U1psuPTCpysqZWmo3zp04oF%2F5%2FTLn51Uf0y4kOR6xR4ld4QdcL%2F50YAwB1RO6ndi%2FPiQOibsiVu9BqUD6alxhZVcvVVsKCWcN8%2Bg%3D%3D&cf_expiry=1719325671&region=us-east-1&namespace=tigera&repo_name=operator": dial tcp: lookup cdn03.quay.io: no such host Jun 25 14:17:51.997048 kubelet[3283]: E0625 14:17:51.996794 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with ErrImagePull: \"failed to pull and unpack image \\\"quay.io/tigera/operator:v1.34.0\\\": failed to copy: httpReadSeeker: failed open: failed to do request: Get 
\\\"https://cdn03.quay.io/quayio-production-s3/sha256/58/5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAI5LUAQGPZRPNKSJA%2F20240625%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240625T141751Z&X-Amz-Expires=600&X-Amz-SignedHeaders=host&X-Amz-Signature=0a024d912118bff32336ff960d0b19330c908446e112966aef1784f988775efd&cf_sign=gwR0fx5YnIN2bgkBjnpoKbjtVm32eZUbbDb72d2ohcrrEuCBJhgQ5nxpckOM17%2Bq841fDkkMgaOHEL%2BM3CsZZ9iAZ2FqiAB6MOcELnguqG7aw5ue74x%2BcAwIm1bIppcnGUMoaJ1wKMyUgeHSKDpoo5Ja7F0pBY%2BOIrfAl2EBqmU0jspxW3vgVRQKq4ZJSO51z75Y%2FAQmZN5wDepQ4pSFysZsiM7BKrapTE033mnpvDgSJagWr5U1psuPTCpysqZWmo3zp04oF%2F5%2FTLn51Uf0y4kOR6xR4ld4QdcL%2F50YAwB1RO6ndi%2FPiQOibsiVu9BqUD6alxhZVcvVVsKCWcN8%2Bg%3D%3D&cf_expiry=1719325671&region=us-east-1&namespace=tigera&repo_name=operator\\\": dial tcp: lookup cdn03.quay.io: no such host\"" pod="tigera-operator/tigera-operator-76c4974c85-nvw6w" podUID="819f4bba-3f67-4cfd-9c5c-b7f6dba2049b" Jun 25 14:17:52.708048 kubelet[3283]: E0625 14:17:52.708010 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/tigera/operator:v1.34.0\\\"\"" pod="tigera-operator/tigera-operator-76c4974c85-nvw6w" podUID="819f4bba-3f67-4cfd-9c5c-b7f6dba2049b" Jun 25 14:18:04.544674 containerd[1911]: time="2024-06-25T14:18:04.544249306Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Jun 25 14:18:05.896660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount569841196.mount: Deactivated successfully. Jun 25 14:18:06.623474 containerd[1911]: time="2024-06-25T14:18:06.623384819Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:06.624886 containerd[1911]: time="2024-06-25T14:18:06.624821884Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=19473634" Jun 25 14:18:06.626801 containerd[1911]: time="2024-06-25T14:18:06.626742420Z" level=info msg="ImageCreate event name:\"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:06.630311 containerd[1911]: time="2024-06-25T14:18:06.630231650Z" level=info msg="ImageUpdate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:06.633820 containerd[1911]: time="2024-06-25T14:18:06.633772312Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:06.636675 containerd[1911]: time="2024-06-25T14:18:06.636587740Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"19467821\" in 2.092261994s" Jun 25 14:18:06.636876 containerd[1911]: time="2024-06-25T14:18:06.636837461Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:5886f48e233edcb89c0e8e3cdbdc40101f3c2dfbe67d7717f01d19c27cd78f92\"" Jun 25 14:18:06.640781 containerd[1911]: time="2024-06-25T14:18:06.640721828Z" level=info msg="CreateContainer 
within sandbox \"82f57fcd297eeaa98de49aecbe0e29edeccd45465fdc4d7496f4d5454c24758f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jun 25 14:18:06.660800 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount486207993.mount: Deactivated successfully. Jun 25 14:18:06.675435 containerd[1911]: time="2024-06-25T14:18:06.675374416Z" level=info msg="CreateContainer within sandbox \"82f57fcd297eeaa98de49aecbe0e29edeccd45465fdc4d7496f4d5454c24758f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"407220353518995d24ed408fec3387c0f6f8e4da021792a0fc3399258284da81\"" Jun 25 14:18:06.677033 containerd[1911]: time="2024-06-25T14:18:06.676982446Z" level=info msg="StartContainer for \"407220353518995d24ed408fec3387c0f6f8e4da021792a0fc3399258284da81\"" Jun 25 14:18:06.775464 containerd[1911]: time="2024-06-25T14:18:06.775379002Z" level=info msg="StartContainer for \"407220353518995d24ed408fec3387c0f6f8e4da021792a0fc3399258284da81\" returns successfully" Jun 25 14:18:06.840679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount196523434.mount: Deactivated successfully. Jun 25 14:18:07.761056 kubelet[3283]: I0625 14:18:07.761004 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-nvw6w" podStartSLOduration=1.952681334 podCreationTimestamp="2024-06-25 14:17:50 +0000 UTC" firstStartedPulling="2024-06-25 14:17:50.829090757 +0000 UTC m=+14.600178197" lastFinishedPulling="2024-06-25 14:18:06.637327291 +0000 UTC m=+30.408414719" observedRunningTime="2024-06-25 14:18:07.760672935 +0000 UTC m=+31.531760387" watchObservedRunningTime="2024-06-25 14:18:07.760917856 +0000 UTC m=+31.532005308" Jun 25 14:18:12.137000 audit[3648]: NETFILTER_CFG table=filter:89 family=2 entries=14 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.138948 kernel: kauditd_printk_skb: 143 callbacks suppressed Jun 25 14:18:12.139072 kernel: audit: type=1325 audit(1719325092.137:269): table=filter:89 family=2 entries=14 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.137000 audit[3648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffeb53f380 a2=0 a3=1 items=0 ppid=3461 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.147004 kernel: audit: type=1300 audit(1719325092.137:269): arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffeb53f380 a2=0 a3=1 items=0 ppid=3461 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.151681 kernel: audit: type=1327 audit(1719325092.137:269): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.147000 audit[3648]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.147000 audit[3648]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb53f380 a2=0 a3=1 items=0 
ppid=3461 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.155761 kernel: audit: type=1325 audit(1719325092.147:270): table=nat:90 family=2 entries=12 op=nft_register_rule pid=3648 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.147000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.164700 kernel: audit: type=1300 audit(1719325092.147:270): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffeb53f380 a2=0 a3=1 items=0 ppid=3461 pid=3648 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.164856 kernel: audit: type=1327 audit(1719325092.147:270): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.167000 audit[3650]: NETFILTER_CFG table=filter:91 family=2 entries=15 op=nft_register_rule pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.167000 audit[3650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcf906470 a2=0 a3=1 items=0 ppid=3461 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.176495 kernel: audit: type=1325 audit(1719325092.167:271): table=filter:91 family=2 entries=15 op=nft_register_rule pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.176681 kernel: audit: type=1300 audit(1719325092.167:271): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcf906470 a2=0 a3=1 items=0 ppid=3461 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.180659 kernel: audit: type=1327 audit(1719325092.167:271): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.167000 audit[3650]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.167000 audit[3650]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcf906470 a2=0 a3=1 items=0 ppid=3461 pid=3650 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:12.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:12.190660 kernel: audit: type=1325 audit(1719325092.167:272): table=nat:92 family=2 entries=12 op=nft_register_rule pid=3650 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:12.300693 kubelet[3283]: I0625 14:18:12.300625 3283 
topology_manager.go:215] "Topology Admit Handler" podUID="8b2ec7b2-bb69-405a-885b-f51f518d60ce" podNamespace="calico-system" podName="calico-typha-65466bdf8-nqwhf" Jun 25 14:18:12.430698 kubelet[3283]: I0625 14:18:12.430636 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8b2ec7b2-bb69-405a-885b-f51f518d60ce-typha-certs\") pod \"calico-typha-65466bdf8-nqwhf\" (UID: \"8b2ec7b2-bb69-405a-885b-f51f518d60ce\") " pod="calico-system/calico-typha-65466bdf8-nqwhf" Jun 25 14:18:12.430898 kubelet[3283]: I0625 14:18:12.430738 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b2ec7b2-bb69-405a-885b-f51f518d60ce-tigera-ca-bundle\") pod \"calico-typha-65466bdf8-nqwhf\" (UID: \"8b2ec7b2-bb69-405a-885b-f51f518d60ce\") " pod="calico-system/calico-typha-65466bdf8-nqwhf" Jun 25 14:18:12.430898 kubelet[3283]: I0625 14:18:12.430821 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4rtq\" (UniqueName: \"kubernetes.io/projected/8b2ec7b2-bb69-405a-885b-f51f518d60ce-kube-api-access-k4rtq\") pod \"calico-typha-65466bdf8-nqwhf\" (UID: \"8b2ec7b2-bb69-405a-885b-f51f518d60ce\") " pod="calico-system/calico-typha-65466bdf8-nqwhf" Jun 25 14:18:12.477674 kubelet[3283]: I0625 14:18:12.477624 3283 topology_manager.go:215] "Topology Admit Handler" podUID="8bdf426a-845b-4d04-9cb4-36097c21bbff" podNamespace="calico-system" podName="calico-node-4cqxl" Jun 25 14:18:12.608944 containerd[1911]: time="2024-06-25T14:18:12.608857733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65466bdf8-nqwhf,Uid:8b2ec7b2-bb69-405a-885b-f51f518d60ce,Namespace:calico-system,Attempt:0,}" Jun 25 14:18:12.617312 kubelet[3283]: I0625 14:18:12.617208 3283 topology_manager.go:215] "Topology Admit Handler" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" podNamespace="calico-system" podName="csi-node-driver-s85fn" Jun 25 14:18:12.622005 kubelet[3283]: E0625 14:18:12.621949 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:12.656753 kubelet[3283]: I0625 14:18:12.655711 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-cni-net-dir\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.656753 kubelet[3283]: I0625 14:18:12.656087 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-var-lib-calico\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.656753 kubelet[3283]: I0625 14:18:12.656424 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-policysync\") pod \"calico-node-4cqxl\" (UID: 
\"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.656753 kubelet[3283]: I0625 14:18:12.656602 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-cni-bin-dir\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.657105 kubelet[3283]: I0625 14:18:12.656930 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-xtables-lock\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.657105 kubelet[3283]: I0625 14:18:12.657022 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8bdf426a-845b-4d04-9cb4-36097c21bbff-tigera-ca-bundle\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.657327 kubelet[3283]: I0625 14:18:12.657289 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/8bdf426a-845b-4d04-9cb4-36097c21bbff-node-certs\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.657771 kubelet[3283]: I0625 14:18:12.657657 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-lib-modules\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.658016 kubelet[3283]: I0625 14:18:12.657980 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-var-run-calico\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.658241 kubelet[3283]: I0625 14:18:12.658093 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-flexvol-driver-host\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.658435 kubelet[3283]: I0625 14:18:12.658393 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p2wf\" (UniqueName: \"kubernetes.io/projected/8bdf426a-845b-4d04-9cb4-36097c21bbff-kube-api-access-7p2wf\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.658721 kubelet[3283]: I0625 14:18:12.658677 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/8bdf426a-845b-4d04-9cb4-36097c21bbff-cni-log-dir\") pod \"calico-node-4cqxl\" (UID: \"8bdf426a-845b-4d04-9cb4-36097c21bbff\") " 
pod="calico-system/calico-node-4cqxl" Jun 25 14:18:12.738308 containerd[1911]: time="2024-06-25T14:18:12.737720767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:12.741740 containerd[1911]: time="2024-06-25T14:18:12.738249441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:12.742125 containerd[1911]: time="2024-06-25T14:18:12.742024751Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:12.743047 containerd[1911]: time="2024-06-25T14:18:12.742472641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:12.760132 kubelet[3283]: I0625 14:18:12.760018 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/cc7acd19-00be-407a-b3d7-2b1d30780fb3-varrun\") pod \"csi-node-driver-s85fn\" (UID: \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\") " pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:12.760289 kubelet[3283]: I0625 14:18:12.760172 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/cc7acd19-00be-407a-b3d7-2b1d30780fb3-registration-dir\") pod \"csi-node-driver-s85fn\" (UID: \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\") " pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:12.760289 kubelet[3283]: I0625 14:18:12.760281 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4kgdg\" (UniqueName: \"kubernetes.io/projected/cc7acd19-00be-407a-b3d7-2b1d30780fb3-kube-api-access-4kgdg\") pod \"csi-node-driver-s85fn\" (UID: \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\") " pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:12.760541 kubelet[3283]: I0625 14:18:12.760507 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/cc7acd19-00be-407a-b3d7-2b1d30780fb3-kubelet-dir\") pod \"csi-node-driver-s85fn\" (UID: \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\") " pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:12.760972 kubelet[3283]: I0625 14:18:12.760932 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/cc7acd19-00be-407a-b3d7-2b1d30780fb3-socket-dir\") pod \"csi-node-driver-s85fn\" (UID: \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\") " pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:12.814796 kubelet[3283]: E0625 14:18:12.814757 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.815027 kubelet[3283]: W0625 14:18:12.814995 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.815196 kubelet[3283]: E0625 14:18:12.815172 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.818870 kubelet[3283]: E0625 14:18:12.818825 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.822455 kubelet[3283]: W0625 14:18:12.819119 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.822639 kubelet[3283]: E0625 14:18:12.822488 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.862546 kubelet[3283]: E0625 14:18:12.862374 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.862546 kubelet[3283]: W0625 14:18:12.862411 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.862546 kubelet[3283]: E0625 14:18:12.862450 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.863201 kubelet[3283]: E0625 14:18:12.862970 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.863201 kubelet[3283]: W0625 14:18:12.863001 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.863201 kubelet[3283]: E0625 14:18:12.863043 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.863455 kubelet[3283]: E0625 14:18:12.863421 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.863455 kubelet[3283]: W0625 14:18:12.863448 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.863686 kubelet[3283]: E0625 14:18:12.863486 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.863890 kubelet[3283]: E0625 14:18:12.863848 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.863890 kubelet[3283]: W0625 14:18:12.863876 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.864036 kubelet[3283]: E0625 14:18:12.863912 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.864413 kubelet[3283]: E0625 14:18:12.864371 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.864413 kubelet[3283]: W0625 14:18:12.864405 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.864749 kubelet[3283]: E0625 14:18:12.864578 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.864887 kubelet[3283]: E0625 14:18:12.864852 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.864887 kubelet[3283]: W0625 14:18:12.864881 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.865065 kubelet[3283]: E0625 14:18:12.864914 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.865422 kubelet[3283]: E0625 14:18:12.865375 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.865422 kubelet[3283]: W0625 14:18:12.865405 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.865671 kubelet[3283]: E0625 14:18:12.865453 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.865921 kubelet[3283]: E0625 14:18:12.865886 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.865921 kubelet[3283]: W0625 14:18:12.865915 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.866066 kubelet[3283]: E0625 14:18:12.865951 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.866793 kubelet[3283]: E0625 14:18:12.866751 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.866793 kubelet[3283]: W0625 14:18:12.866785 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.867060 kubelet[3283]: E0625 14:18:12.866829 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.867490 kubelet[3283]: E0625 14:18:12.867436 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.867602 kubelet[3283]: W0625 14:18:12.867487 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.874737 kubelet[3283]: E0625 14:18:12.867780 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.874737 kubelet[3283]: E0625 14:18:12.872786 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.874737 kubelet[3283]: W0625 14:18:12.872814 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.874737 kubelet[3283]: E0625 14:18:12.873016 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.877696 kubelet[3283]: E0625 14:18:12.877636 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.877696 kubelet[3283]: W0625 14:18:12.877677 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.877982 kubelet[3283]: E0625 14:18:12.877906 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.878253 kubelet[3283]: E0625 14:18:12.878207 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.878253 kubelet[3283]: W0625 14:18:12.878237 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.878579 kubelet[3283]: E0625 14:18:12.878439 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.878851 kubelet[3283]: E0625 14:18:12.878815 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.878851 kubelet[3283]: W0625 14:18:12.878845 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.879117 kubelet[3283]: E0625 14:18:12.879012 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.879593 kubelet[3283]: E0625 14:18:12.879272 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.879593 kubelet[3283]: W0625 14:18:12.879310 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.879593 kubelet[3283]: E0625 14:18:12.879490 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.880656 kubelet[3283]: E0625 14:18:12.879968 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.880656 kubelet[3283]: W0625 14:18:12.880009 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.880656 kubelet[3283]: E0625 14:18:12.880197 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.880912 kubelet[3283]: E0625 14:18:12.880720 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.880912 kubelet[3283]: W0625 14:18:12.880741 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.881040 kubelet[3283]: E0625 14:18:12.880940 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.881490 kubelet[3283]: E0625 14:18:12.881165 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.881490 kubelet[3283]: W0625 14:18:12.881194 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.881490 kubelet[3283]: E0625 14:18:12.881351 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.881756 kubelet[3283]: E0625 14:18:12.881530 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.881756 kubelet[3283]: W0625 14:18:12.881546 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.881883 kubelet[3283]: E0625 14:18:12.881797 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.882678 kubelet[3283]: E0625 14:18:12.882315 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.882678 kubelet[3283]: W0625 14:18:12.882446 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.883702 kubelet[3283]: E0625 14:18:12.883011 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.884820 kubelet[3283]: E0625 14:18:12.884131 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.884820 kubelet[3283]: W0625 14:18:12.884260 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.884820 kubelet[3283]: E0625 14:18:12.884457 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.894820 kubelet[3283]: E0625 14:18:12.894768 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.894820 kubelet[3283]: W0625 14:18:12.894811 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.895280 kubelet[3283]: E0625 14:18:12.895053 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.895514 kubelet[3283]: E0625 14:18:12.895472 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.895514 kubelet[3283]: W0625 14:18:12.895504 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.896035 kubelet[3283]: E0625 14:18:12.895758 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.902080 kubelet[3283]: E0625 14:18:12.901798 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.902080 kubelet[3283]: W0625 14:18:12.901830 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.903722 kubelet[3283]: E0625 14:18:12.903672 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:12.904106 kubelet[3283]: E0625 14:18:12.904015 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.904106 kubelet[3283]: W0625 14:18:12.904038 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.904106 kubelet[3283]: E0625 14:18:12.904068 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.921394 kubelet[3283]: E0625 14:18:12.921359 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:12.921964 kubelet[3283]: W0625 14:18:12.921924 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:12.922147 kubelet[3283]: E0625 14:18:12.922124 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:12.955899 containerd[1911]: time="2024-06-25T14:18:12.955817135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-65466bdf8-nqwhf,Uid:8b2ec7b2-bb69-405a-885b-f51f518d60ce,Namespace:calico-system,Attempt:0,} returns sandbox id \"ba16838ba033b859f7d8d1f7acb94ee78f288c3b75663c206fa260a94f9df6db\"" Jun 25 14:18:12.960359 containerd[1911]: time="2024-06-25T14:18:12.960295731Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Jun 25 14:18:13.090366 containerd[1911]: time="2024-06-25T14:18:13.090211333Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4cqxl,Uid:8bdf426a-845b-4d04-9cb4-36097c21bbff,Namespace:calico-system,Attempt:0,}" Jun 25 14:18:13.140009 containerd[1911]: time="2024-06-25T14:18:13.134569156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:13.140009 containerd[1911]: time="2024-06-25T14:18:13.134867297Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:13.140009 containerd[1911]: time="2024-06-25T14:18:13.134934882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:13.140009 containerd[1911]: time="2024-06-25T14:18:13.134973090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:13.212000 audit[3755]: NETFILTER_CFG table=filter:93 family=2 entries=16 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:13.212000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffd73b6690 a2=0 a3=1 items=0 ppid=3461 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:13.212000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:13.214000 audit[3755]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=3755 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:13.214000 audit[3755]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd73b6690 a2=0 a3=1 items=0 ppid=3461 pid=3755 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:13.214000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:13.274930 containerd[1911]: time="2024-06-25T14:18:13.274801565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4cqxl,Uid:8bdf426a-845b-4d04-9cb4-36097c21bbff,Namespace:calico-system,Attempt:0,} returns sandbox id \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\"" Jun 25 14:18:14.543161 kubelet[3283]: E0625 14:18:14.542365 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:15.337716 containerd[1911]: time="2024-06-25T14:18:15.337630762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:15.339296 containerd[1911]: time="2024-06-25T14:18:15.339205071Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=27476513" Jun 25 14:18:15.340988 containerd[1911]: time="2024-06-25T14:18:15.340876149Z" level=info msg="ImageCreate event name:\"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:15.345500 containerd[1911]: time="2024-06-25T14:18:15.345416317Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:15.349603 containerd[1911]: time="2024-06-25T14:18:15.349548231Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:15.351443 containerd[1911]: time="2024-06-25T14:18:15.351360790Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"28843073\" in 2.390792262s" Jun 25 14:18:15.351443 containerd[1911]: time="2024-06-25T14:18:15.351435550Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:2551880d36cd0ce4c6820747ffe4c40cbf344d26df0ecd878808432ad4f78f03\"" Jun 25 14:18:15.372845 containerd[1911]: time="2024-06-25T14:18:15.372777385Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Jun 25 14:18:15.390153 containerd[1911]: time="2024-06-25T14:18:15.390072806Z" level=info msg="CreateContainer within sandbox \"ba16838ba033b859f7d8d1f7acb94ee78f288c3b75663c206fa260a94f9df6db\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jun 25 14:18:15.426807 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1571213291.mount: Deactivated successfully. Jun 25 14:18:15.435967 containerd[1911]: time="2024-06-25T14:18:15.435901878Z" level=info msg="CreateContainer within sandbox \"ba16838ba033b859f7d8d1f7acb94ee78f288c3b75663c206fa260a94f9df6db\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6187be0e0fa2eb47e339cc5a320f64804734558d3097a2971983a495fc4a10d1\"" Jun 25 14:18:15.442533 containerd[1911]: time="2024-06-25T14:18:15.442429505Z" level=info msg="StartContainer for \"6187be0e0fa2eb47e339cc5a320f64804734558d3097a2971983a495fc4a10d1\"" Jun 25 14:18:15.599549 containerd[1911]: time="2024-06-25T14:18:15.599403748Z" level=info msg="StartContainer for \"6187be0e0fa2eb47e339cc5a320f64804734558d3097a2971983a495fc4a10d1\" returns successfully" Jun 25 14:18:15.785487 kubelet[3283]: E0625 14:18:15.785441 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.786185 kubelet[3283]: W0625 14:18:15.786150 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.786340 kubelet[3283]: E0625 14:18:15.786305 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.788831 kubelet[3283]: E0625 14:18:15.788792 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.789035 kubelet[3283]: W0625 14:18:15.789006 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.789160 kubelet[3283]: E0625 14:18:15.789139 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.790309 kubelet[3283]: E0625 14:18:15.790276 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.790554 kubelet[3283]: W0625 14:18:15.790525 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.790727 kubelet[3283]: E0625 14:18:15.790704 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.792431 kubelet[3283]: E0625 14:18:15.792393 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.792666 kubelet[3283]: W0625 14:18:15.792635 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.792816 kubelet[3283]: E0625 14:18:15.792793 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.796817 kubelet[3283]: E0625 14:18:15.796774 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.799844 kubelet[3283]: W0625 14:18:15.799781 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.800065 kubelet[3283]: E0625 14:18:15.800041 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.804922 kubelet[3283]: E0625 14:18:15.804883 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.805120 kubelet[3283]: W0625 14:18:15.805092 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.805258 kubelet[3283]: E0625 14:18:15.805236 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.808328 kubelet[3283]: E0625 14:18:15.808260 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.808694 kubelet[3283]: W0625 14:18:15.808664 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.808843 kubelet[3283]: E0625 14:18:15.808821 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.809487 kubelet[3283]: E0625 14:18:15.809460 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.809682 kubelet[3283]: W0625 14:18:15.809656 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.809808 kubelet[3283]: E0625 14:18:15.809786 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.812877 kubelet[3283]: E0625 14:18:15.812829 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.813180 kubelet[3283]: W0625 14:18:15.813132 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.813316 kubelet[3283]: E0625 14:18:15.813294 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.814607 kubelet[3283]: E0625 14:18:15.814569 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.814848 kubelet[3283]: W0625 14:18:15.814816 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.815025 kubelet[3283]: E0625 14:18:15.815001 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.821637 kubelet[3283]: E0625 14:18:15.821579 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.821878 kubelet[3283]: W0625 14:18:15.821848 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.822010 kubelet[3283]: E0625 14:18:15.821987 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.824967 kubelet[3283]: E0625 14:18:15.824918 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.825225 kubelet[3283]: W0625 14:18:15.825194 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.825358 kubelet[3283]: E0625 14:18:15.825336 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.833855 kubelet[3283]: E0625 14:18:15.833818 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.834097 kubelet[3283]: W0625 14:18:15.834069 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.834223 kubelet[3283]: E0625 14:18:15.834202 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.835825 kubelet[3283]: E0625 14:18:15.835785 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.836037 kubelet[3283]: W0625 14:18:15.836008 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.836188 kubelet[3283]: E0625 14:18:15.836167 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.838900 kubelet[3283]: E0625 14:18:15.838864 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.839166 kubelet[3283]: W0625 14:18:15.839120 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.839313 kubelet[3283]: E0625 14:18:15.839292 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.845945 kubelet[3283]: E0625 14:18:15.845910 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.846159 kubelet[3283]: W0625 14:18:15.846132 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.846306 kubelet[3283]: E0625 14:18:15.846284 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.847445 kubelet[3283]: E0625 14:18:15.847395 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.847690 kubelet[3283]: W0625 14:18:15.847661 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.847850 kubelet[3283]: E0625 14:18:15.847827 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.848910 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.851763 kubelet[3283]: W0625 14:18:15.848945 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.848983 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.849358 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.851763 kubelet[3283]: W0625 14:18:15.849373 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.849397 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.849694 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.851763 kubelet[3283]: W0625 14:18:15.849709 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.849732 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.851763 kubelet[3283]: E0625 14:18:15.851692 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.851763 kubelet[3283]: W0625 14:18:15.851718 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.854136 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.854661 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.856171 kubelet[3283]: W0625 14:18:15.854684 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.854714 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.855602 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.856171 kubelet[3283]: W0625 14:18:15.855663 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.855696 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.856031 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.856171 kubelet[3283]: W0625 14:18:15.856047 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856171 kubelet[3283]: E0625 14:18:15.856072 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.856756 kubelet[3283]: E0625 14:18:15.856344 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.856756 kubelet[3283]: W0625 14:18:15.856359 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856756 kubelet[3283]: E0625 14:18:15.856383 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.856756 kubelet[3283]: E0625 14:18:15.856667 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.856756 kubelet[3283]: W0625 14:18:15.856683 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.856756 kubelet[3283]: E0625 14:18:15.856706 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.857066 kubelet[3283]: E0625 14:18:15.857000 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.857066 kubelet[3283]: W0625 14:18:15.857015 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.857066 kubelet[3283]: E0625 14:18:15.857040 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.858103 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.861760 kubelet[3283]: W0625 14:18:15.858138 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.858175 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.858577 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.861760 kubelet[3283]: W0625 14:18:15.858595 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.858682 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.859108 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.861760 kubelet[3283]: W0625 14:18:15.859142 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.859179 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.859571 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.861760 kubelet[3283]: W0625 14:18:15.859592 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.859648 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.860105 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.861760 kubelet[3283]: W0625 14:18:15.860127 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.861760 kubelet[3283]: E0625 14:18:15.860156 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:15.862773 kubelet[3283]: E0625 14:18:15.862166 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:15.862773 kubelet[3283]: W0625 14:18:15.862192 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:15.862773 kubelet[3283]: E0625 14:18:15.862227 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.543638 kubelet[3283]: E0625 14:18:16.542483 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:16.745352 containerd[1911]: time="2024-06-25T14:18:16.745290365Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:16.746171 containerd[1911]: time="2024-06-25T14:18:16.746106812Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=4916009" Jun 25 14:18:16.747863 containerd[1911]: time="2024-06-25T14:18:16.747809317Z" level=info msg="ImageCreate event name:\"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:16.754110 containerd[1911]: time="2024-06-25T14:18:16.753279452Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:16.755864 containerd[1911]: time="2024-06-25T14:18:16.755809277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:16.757855 containerd[1911]: time="2024-06-25T14:18:16.757790892Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6282537\" in 1.384938387s" Jun 25 14:18:16.758176 containerd[1911]: time="2024-06-25T14:18:16.758134021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:4b6a6a9b369fa6127e23e376ac423670fa81290e0860917acaacae108e3cc064\"" Jun 25 14:18:16.763949 containerd[1911]: time="2024-06-25T14:18:16.763162279Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jun 25 14:18:16.795264 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount828139615.mount: Deactivated successfully. 
Jun 25 14:18:16.804549 containerd[1911]: time="2024-06-25T14:18:16.804432558Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae\"" Jun 25 14:18:16.807965 containerd[1911]: time="2024-06-25T14:18:16.807895242Z" level=info msg="StartContainer for \"de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae\"" Jun 25 14:18:16.850411 kubelet[3283]: E0625 14:18:16.849851 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.850411 kubelet[3283]: W0625 14:18:16.849915 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.850411 kubelet[3283]: E0625 14:18:16.849986 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.851885 kubelet[3283]: E0625 14:18:16.851449 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.851885 kubelet[3283]: W0625 14:18:16.851477 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.851885 kubelet[3283]: E0625 14:18:16.851513 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.852570 kubelet[3283]: E0625 14:18:16.852179 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.852570 kubelet[3283]: W0625 14:18:16.852215 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.852570 kubelet[3283]: E0625 14:18:16.852244 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.853264 kubelet[3283]: E0625 14:18:16.852898 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.853264 kubelet[3283]: W0625 14:18:16.852935 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.853264 kubelet[3283]: E0625 14:18:16.852963 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.853880 kubelet[3283]: E0625 14:18:16.853531 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.853880 kubelet[3283]: W0625 14:18:16.853563 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.853880 kubelet[3283]: E0625 14:18:16.853590 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.854560 kubelet[3283]: E0625 14:18:16.854159 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.854560 kubelet[3283]: W0625 14:18:16.854194 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.854560 kubelet[3283]: E0625 14:18:16.854226 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.855398 kubelet[3283]: E0625 14:18:16.854893 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.855398 kubelet[3283]: W0625 14:18:16.854919 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.855398 kubelet[3283]: E0625 14:18:16.854951 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.856260 kubelet[3283]: E0625 14:18:16.856023 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.856260 kubelet[3283]: W0625 14:18:16.856049 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.856260 kubelet[3283]: E0625 14:18:16.856086 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.857250 kubelet[3283]: E0625 14:18:16.857223 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.857445 kubelet[3283]: W0625 14:18:16.857421 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.857586 kubelet[3283]: E0625 14:18:16.857565 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.858137 kubelet[3283]: E0625 14:18:16.858117 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.858261 kubelet[3283]: W0625 14:18:16.858238 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.858396 kubelet[3283]: E0625 14:18:16.858376 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.858866 kubelet[3283]: E0625 14:18:16.858835 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.859065 kubelet[3283]: W0625 14:18:16.859040 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.859209 kubelet[3283]: E0625 14:18:16.859187 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.863221 kubelet[3283]: E0625 14:18:16.863187 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.863438 kubelet[3283]: W0625 14:18:16.863410 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.863688 kubelet[3283]: E0625 14:18:16.863606 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.865379 kubelet[3283]: E0625 14:18:16.865343 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.865706 kubelet[3283]: W0625 14:18:16.865674 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.865935 kubelet[3283]: E0625 14:18:16.865889 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.868794 kubelet[3283]: E0625 14:18:16.868761 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.872495 kubelet[3283]: W0625 14:18:16.872435 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.872756 kubelet[3283]: E0625 14:18:16.872733 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.874340 kubelet[3283]: E0625 14:18:16.874303 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.874589 kubelet[3283]: W0625 14:18:16.874559 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.874768 kubelet[3283]: E0625 14:18:16.874745 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.875649 kubelet[3283]: E0625 14:18:16.875590 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.877697 kubelet[3283]: W0625 14:18:16.875860 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.877974 kubelet[3283]: E0625 14:18:16.877935 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.882305 kubelet[3283]: E0625 14:18:16.882270 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.882740 kubelet[3283]: W0625 14:18:16.882704 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.882930 kubelet[3283]: E0625 14:18:16.882907 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.884085 kubelet[3283]: E0625 14:18:16.884051 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.884291 kubelet[3283]: W0625 14:18:16.884264 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.884479 kubelet[3283]: E0625 14:18:16.884455 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.885094 kubelet[3283]: E0625 14:18:16.885067 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.885347 kubelet[3283]: W0625 14:18:16.885320 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.885510 kubelet[3283]: E0625 14:18:16.885476 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.894048 kubelet[3283]: E0625 14:18:16.894007 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.894279 kubelet[3283]: W0625 14:18:16.894245 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.894445 kubelet[3283]: E0625 14:18:16.894423 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.899359 kubelet[3283]: I0625 14:18:16.899313 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-65466bdf8-nqwhf" podStartSLOduration=2.506140468 podCreationTimestamp="2024-06-25 14:18:12 +0000 UTC" firstStartedPulling="2024-06-25 14:18:12.95890789 +0000 UTC m=+36.729995318" lastFinishedPulling="2024-06-25 14:18:15.352024548 +0000 UTC m=+39.123111988" observedRunningTime="2024-06-25 14:18:15.797829136 +0000 UTC m=+39.568916600" watchObservedRunningTime="2024-06-25 14:18:16.899257138 +0000 UTC m=+40.670344590" Jun 25 14:18:16.900754 kubelet[3283]: E0625 14:18:16.900719 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.906807 kubelet[3283]: W0625 14:18:16.906760 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.907044 kubelet[3283]: E0625 14:18:16.907019 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.911794 kubelet[3283]: E0625 14:18:16.911745 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.912020 kubelet[3283]: W0625 14:18:16.911991 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.912196 kubelet[3283]: E0625 14:18:16.912173 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.912742 kubelet[3283]: E0625 14:18:16.912718 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.913224 kubelet[3283]: W0625 14:18:16.913156 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.913459 kubelet[3283]: E0625 14:18:16.913432 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.914280 kubelet[3283]: E0625 14:18:16.914248 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.914495 kubelet[3283]: W0625 14:18:16.914460 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.915805 kubelet[3283]: E0625 14:18:16.914720 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.921049 kubelet[3283]: E0625 14:18:16.921018 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.923775 kubelet[3283]: W0625 14:18:16.923723 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.924031 kubelet[3283]: E0625 14:18:16.924008 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.924741 kubelet[3283]: E0625 14:18:16.924709 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.924992 kubelet[3283]: W0625 14:18:16.924964 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.925207 kubelet[3283]: E0625 14:18:16.925184 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.926049 kubelet[3283]: E0625 14:18:16.926019 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.926221 kubelet[3283]: W0625 14:18:16.926195 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.926345 kubelet[3283]: E0625 14:18:16.926324 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.927212 kubelet[3283]: E0625 14:18:16.927181 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.927390 kubelet[3283]: W0625 14:18:16.927362 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.942336 kubelet[3283]: E0625 14:18:16.942304 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.959931 kubelet[3283]: E0625 14:18:16.959898 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.960141 kubelet[3283]: W0625 14:18:16.960114 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.960488 kubelet[3283]: E0625 14:18:16.960466 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.966582 kubelet[3283]: E0625 14:18:16.966543 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.966875 kubelet[3283]: W0625 14:18:16.966842 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.967215 kubelet[3283]: E0625 14:18:16.967187 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.968962 kubelet[3283]: E0625 14:18:16.968929 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.969178 kubelet[3283]: W0625 14:18:16.969148 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.969382 kubelet[3283]: E0625 14:18:16.969346 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.970158 kubelet[3283]: E0625 14:18:16.970123 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.970372 kubelet[3283]: W0625 14:18:16.970340 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.970563 kubelet[3283]: E0625 14:18:16.970541 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jun 25 14:18:16.976736 kubelet[3283]: E0625 14:18:16.976686 3283 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jun 25 14:18:16.977020 kubelet[3283]: W0625 14:18:16.976987 3283 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jun 25 14:18:16.977191 kubelet[3283]: E0625 14:18:16.977170 3283 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jun 25 14:18:16.988000 audit[3908]: NETFILTER_CFG table=filter:95 family=2 entries=15 op=nft_register_rule pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:16.988000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffc3cb7780 a2=0 a3=1 items=0 ppid=3461 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:16.988000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:16.990000 audit[3908]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3908 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:16.990000 audit[3908]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc3cb7780 a2=0 a3=1 items=0 ppid=3461 pid=3908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:16.990000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:17.086896 containerd[1911]: time="2024-06-25T14:18:17.086703113Z" level=info msg="StartContainer for \"de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae\" returns successfully" Jun 25 14:18:17.361329 systemd[1]: run-containerd-runc-k8s.io-de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae-runc.ipSaWL.mount: Deactivated successfully. Jun 25 14:18:17.362195 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae-rootfs.mount: Deactivated successfully. Jun 25 14:18:17.567096 containerd[1911]: time="2024-06-25T14:18:17.567020888Z" level=info msg="shim disconnected" id=de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae namespace=k8s.io Jun 25 14:18:17.567485 containerd[1911]: time="2024-06-25T14:18:17.567444801Z" level=warning msg="cleaning up after shim disconnected" id=de81d8ab989c69c79bdf7c8dedd7f2f8723afa001ab5dc41a3aada6b7eef2eae namespace=k8s.io Jun 25 14:18:17.567684 containerd[1911]: time="2024-06-25T14:18:17.567651526Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:18:17.799762 containerd[1911]: time="2024-06-25T14:18:17.799704133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Jun 25 14:18:18.543727 kubelet[3283]: E0625 14:18:18.543690 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:18.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.41:22-139.178.68.195:39958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:18.584336 systemd[1]: Started sshd@7-172.31.29.41:22-139.178.68.195:39958.service - OpenSSH per-connection server daemon (139.178.68.195:39958). 
Jun 25 14:18:18.585855 kernel: kauditd_printk_skb: 14 callbacks suppressed Jun 25 14:18:18.585928 kernel: audit: type=1130 audit(1719325098.584:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.41:22-139.178.68.195:39958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:18.768000 audit[3957]: USER_ACCT pid=3957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.769537 sshd[3957]: Accepted publickey for core from 139.178.68.195 port 39958 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:18.773651 kernel: audit: type=1101 audit(1719325098.768:278): pid=3957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.775000 audit[3957]: CRED_ACQ pid=3957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.777114 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:18.783502 kernel: audit: type=1103 audit(1719325098.775:279): pid=3957 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.783787 kernel: audit: type=1006 audit(1719325098.775:280): pid=3957 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Jun 25 14:18:18.783897 kernel: audit: type=1300 audit(1719325098.775:280): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff94b23d0 a2=3 a3=1 items=0 ppid=1 pid=3957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.775000 audit[3957]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff94b23d0 a2=3 a3=1 items=0 ppid=1 pid=3957 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:18.775000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:18.790351 kernel: audit: type=1327 audit(1719325098.775:280): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:18.798918 systemd-logind[1895]: New session 8 of user core. Jun 25 14:18:18.804234 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jun 25 14:18:18.848000 audit[3957]: USER_START pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.855387 kernel: audit: type=1105 audit(1719325098.848:281): pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.854000 audit[3960]: CRED_ACQ pid=3960 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:18.876641 kernel: audit: type=1103 audit(1719325098.854:282): pid=3960 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.355153 sshd[3957]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:19.357000 audit[3957]: USER_END pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.357000 audit[3957]: CRED_DISP pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.362443 systemd[1]: sshd@7-172.31.29.41:22-139.178.68.195:39958.service: Deactivated successfully. Jun 25 14:18:19.364033 systemd[1]: session-8.scope: Deactivated successfully. Jun 25 14:18:19.366489 systemd-logind[1895]: Session 8 logged out. Waiting for processes to exit. Jun 25 14:18:19.367883 kernel: audit: type=1106 audit(1719325099.357:283): pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.368131 kernel: audit: type=1104 audit(1719325099.357:284): pid=3957 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:19.369647 systemd-logind[1895]: Removed session 8. Jun 25 14:18:19.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-172.31.29.41:22-139.178.68.195:39958 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:20.542682 kubelet[3283]: E0625 14:18:20.542111 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:22.293131 containerd[1911]: time="2024-06-25T14:18:22.291189848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:22.296337 containerd[1911]: time="2024-06-25T14:18:22.295933176Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=86799715" Jun 25 14:18:22.298917 containerd[1911]: time="2024-06-25T14:18:22.298839525Z" level=info msg="ImageCreate event name:\"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:22.303492 containerd[1911]: time="2024-06-25T14:18:22.303414600Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:22.305384 containerd[1911]: time="2024-06-25T14:18:22.305316558Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:22.307852 containerd[1911]: time="2024-06-25T14:18:22.307702130Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"88166283\" in 4.506938693s" Jun 25 14:18:22.308190 containerd[1911]: time="2024-06-25T14:18:22.308130687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:adcb19ea66141abcd7dc426e3205f2e6ff26e524a3f7148c97f3d49933f502ee\"" Jun 25 14:18:22.313954 containerd[1911]: time="2024-06-25T14:18:22.313888942Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jun 25 14:18:22.344770 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3443136003.mount: Deactivated successfully. 
Jun 25 14:18:22.346095 containerd[1911]: time="2024-06-25T14:18:22.346006219Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a\"" Jun 25 14:18:22.348113 containerd[1911]: time="2024-06-25T14:18:22.348051890Z" level=info msg="StartContainer for \"a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a\"" Jun 25 14:18:22.466440 containerd[1911]: time="2024-06-25T14:18:22.466333637Z" level=info msg="StartContainer for \"a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a\" returns successfully" Jun 25 14:18:22.541982 kubelet[3283]: E0625 14:18:22.541937 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:23.460166 containerd[1911]: time="2024-06-25T14:18:23.460080478Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jun 25 14:18:23.509166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a-rootfs.mount: Deactivated successfully. Jun 25 14:18:23.569547 kubelet[3283]: I0625 14:18:23.569269 3283 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Jun 25 14:18:23.614680 kubelet[3283]: I0625 14:18:23.614140 3283 topology_manager.go:215] "Topology Admit Handler" podUID="81a99eed-b323-4056-acb1-e2466297b4af" podNamespace="kube-system" podName="coredns-5dd5756b68-rcct9" Jun 25 14:18:23.624673 kubelet[3283]: I0625 14:18:23.620809 3283 topology_manager.go:215] "Topology Admit Handler" podUID="465b00ac-d2f9-4d4f-8724-a625ed37de19" podNamespace="kube-system" podName="coredns-5dd5756b68-47ngv" Jun 25 14:18:23.652200 kubelet[3283]: I0625 14:18:23.652141 3283 topology_manager.go:215] "Topology Admit Handler" podUID="4d76b6a7-2dd0-4867-abc7-c8bd529a7e66" podNamespace="calico-system" podName="calico-kube-controllers-84fbd4855c-kghkg" Jun 25 14:18:23.656717 kubelet[3283]: I0625 14:18:23.656668 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvhf8\" (UniqueName: \"kubernetes.io/projected/465b00ac-d2f9-4d4f-8724-a625ed37de19-kube-api-access-nvhf8\") pod \"coredns-5dd5756b68-47ngv\" (UID: \"465b00ac-d2f9-4d4f-8724-a625ed37de19\") " pod="kube-system/coredns-5dd5756b68-47ngv" Jun 25 14:18:23.656992 kubelet[3283]: I0625 14:18:23.656968 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/81a99eed-b323-4056-acb1-e2466297b4af-config-volume\") pod \"coredns-5dd5756b68-rcct9\" (UID: \"81a99eed-b323-4056-acb1-e2466297b4af\") " pod="kube-system/coredns-5dd5756b68-rcct9" Jun 25 14:18:23.657216 kubelet[3283]: I0625 14:18:23.657177 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/465b00ac-d2f9-4d4f-8724-a625ed37de19-config-volume\") pod 
\"coredns-5dd5756b68-47ngv\" (UID: \"465b00ac-d2f9-4d4f-8724-a625ed37de19\") " pod="kube-system/coredns-5dd5756b68-47ngv" Jun 25 14:18:23.657307 kubelet[3283]: I0625 14:18:23.657253 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24xvg\" (UniqueName: \"kubernetes.io/projected/81a99eed-b323-4056-acb1-e2466297b4af-kube-api-access-24xvg\") pod \"coredns-5dd5756b68-rcct9\" (UID: \"81a99eed-b323-4056-acb1-e2466297b4af\") " pod="kube-system/coredns-5dd5756b68-rcct9" Jun 25 14:18:23.761116 kubelet[3283]: I0625 14:18:23.760343 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d76b6a7-2dd0-4867-abc7-c8bd529a7e66-tigera-ca-bundle\") pod \"calico-kube-controllers-84fbd4855c-kghkg\" (UID: \"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66\") " pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" Jun 25 14:18:23.761503 kubelet[3283]: I0625 14:18:23.761404 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6kdb\" (UniqueName: \"kubernetes.io/projected/4d76b6a7-2dd0-4867-abc7-c8bd529a7e66-kube-api-access-b6kdb\") pod \"calico-kube-controllers-84fbd4855c-kghkg\" (UID: \"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66\") " pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" Jun 25 14:18:23.930524 containerd[1911]: time="2024-06-25T14:18:23.929800642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-47ngv,Uid:465b00ac-d2f9-4d4f-8724-a625ed37de19,Namespace:kube-system,Attempt:0,}" Jun 25 14:18:23.931666 containerd[1911]: time="2024-06-25T14:18:23.931558323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcct9,Uid:81a99eed-b323-4056-acb1-e2466297b4af,Namespace:kube-system,Attempt:0,}" Jun 25 14:18:23.977995 containerd[1911]: time="2024-06-25T14:18:23.977458508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbd4855c-kghkg,Uid:4d76b6a7-2dd0-4867-abc7-c8bd529a7e66,Namespace:calico-system,Attempt:0,}" Jun 25 14:18:24.287223 containerd[1911]: time="2024-06-25T14:18:24.287130660Z" level=info msg="shim disconnected" id=a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a namespace=k8s.io Jun 25 14:18:24.287223 containerd[1911]: time="2024-06-25T14:18:24.287220718Z" level=warning msg="cleaning up after shim disconnected" id=a415a3345ba013e5ae24835a7934e56f124ffd6e1589b45721fdfe37d24d576a namespace=k8s.io Jun 25 14:18:24.287627 containerd[1911]: time="2024-06-25T14:18:24.287243902Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jun 25 14:18:24.388654 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:24.388850 kernel: audit: type=1130 audit(1719325104.384:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.41:22-139.178.68.195:39962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:24.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.41:22-139.178.68.195:39962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:24.384380 systemd[1]: Started sshd@8-172.31.29.41:22-139.178.68.195:39962.service - OpenSSH per-connection server daemon (139.178.68.195:39962). 
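
The "shim disconnected" messages just above belong to container a415a3345ba0…, i.e. the install-cni container whose CreateContainer/StartContainer appear at the start of this stretch. It is Calico's install-cni step, which exits once the CNI binaries and config are in place in the same sandbox that later hosts calico-node, so its shim exiting (and the rootfs mount cleanup at 14:18:23) is expected rather than a failure. When an id like this turns up on its own, the containerd "returns container id" lines earlier in the journal are the quickest way to map it back to a name; a small sketch of that lookup follows (a hypothetical helper, not a replacement for crictl inspect):

    #!/usr/bin/env python3
    # Hypothetical lookup helper (illustration only): map containerd container
    # ids back to their ContainerMetadata names using the
    # "CreateContainer ... returns container id" journal lines.
    import re
    import sys

    CREATE_RE = re.compile(
        r'CreateContainer within sandbox \\?"([0-9a-f]+)\\?" '
        r'for &ContainerMetadata\{Name:([^,]+),'
        r'.*?returns container id \\?"([0-9a-f]+)\\?"'
    )

    def container_names(lines):
        names = {}
        for line in lines:
            for sandbox, name, container_id in CREATE_RE.findall(line):
                names[container_id] = (name, sandbox)
        return names

    if __name__ == "__main__":
        # Usage: whois_container.py node.log a415a3345ba0
        log_file, wanted = sys.argv[1], sys.argv[2]
        with open(log_file, encoding="utf-8", errors="replace") as fh:
            names = container_names(fh)
        for cid, (name, sandbox) in names.items():
            if cid.startswith(wanted):
                print(f"{cid}\n  name={name}\n  sandbox={sandbox}")
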
Jun 25 14:18:24.549169 containerd[1911]: time="2024-06-25T14:18:24.540551513Z" level=error msg="Failed to destroy network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.545425 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093-shm.mount: Deactivated successfully. Jun 25 14:18:24.553549 containerd[1911]: time="2024-06-25T14:18:24.553460164Z" level=error msg="encountered an error cleaning up failed sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.555445 containerd[1911]: time="2024-06-25T14:18:24.553579274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbd4855c-kghkg,Uid:4d76b6a7-2dd0-4867-abc7-c8bd529a7e66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.557874 containerd[1911]: time="2024-06-25T14:18:24.557814818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s85fn,Uid:cc7acd19-00be-407a-b3d7-2b1d30780fb3,Namespace:calico-system,Attempt:0,}" Jun 25 14:18:24.558832 kubelet[3283]: E0625 14:18:24.557426 3283 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.559192 kubelet[3283]: E0625 14:18:24.559030 3283 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" Jun 25 14:18:24.559192 kubelet[3283]: E0625 14:18:24.559092 3283 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" Jun 25 14:18:24.559350 kubelet[3283]: E0625 14:18:24.559202 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-84fbd4855c-kghkg_calico-system(4d76b6a7-2dd0-4867-abc7-c8bd529a7e66)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-84fbd4855c-kghkg_calico-system(4d76b6a7-2dd0-4867-abc7-c8bd529a7e66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" podUID="4d76b6a7-2dd0-4867-abc7-c8bd529a7e66" Jun 25 14:18:24.567273 containerd[1911]: time="2024-06-25T14:18:24.567175873Z" level=error msg="Failed to destroy network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.572114 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f-shm.mount: Deactivated successfully. Jun 25 14:18:24.585046 containerd[1911]: time="2024-06-25T14:18:24.575380687Z" level=error msg="encountered an error cleaning up failed sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.585046 containerd[1911]: time="2024-06-25T14:18:24.575527516Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-47ngv,Uid:465b00ac-d2f9-4d4f-8724-a625ed37de19,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.585302 kubelet[3283]: E0625 14:18:24.575898 3283 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.585302 kubelet[3283]: E0625 14:18:24.575974 3283 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-47ngv" Jun 25 14:18:24.585302 kubelet[3283]: E0625 14:18:24.576012 3283 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-47ngv" Jun 25 14:18:24.585302 kubelet[3283]: E0625 14:18:24.576091 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-47ngv_kube-system(465b00ac-d2f9-4d4f-8724-a625ed37de19)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-47ngv_kube-system(465b00ac-d2f9-4d4f-8724-a625ed37de19)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-47ngv" podUID="465b00ac-d2f9-4d4f-8724-a625ed37de19" Jun 25 14:18:24.593000 audit[4075]: USER_ACCT pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.599160 sshd[4075]: Accepted publickey for core from 139.178.68.195 port 39962 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:24.599669 kernel: audit: type=1101 audit(1719325104.593:287): pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.599000 audit[4075]: CRED_ACQ pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.604059 sshd[4075]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:24.609426 kernel: audit: type=1103 audit(1719325104.599:288): pid=4075 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.609603 kernel: audit: type=1006 audit(1719325104.599:289): pid=4075 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jun 25 14:18:24.599000 audit[4075]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7cc2860 a2=3 a3=1 items=0 ppid=1 pid=4075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.613994 containerd[1911]: time="2024-06-25T14:18:24.613293916Z" level=error msg="Failed to destroy network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.615162 kernel: audit: type=1300 audit(1719325104.599:289): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7cc2860 a2=3 a3=1 items=0 ppid=1 pid=4075 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:24.599000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:24.620540 kernel: audit: type=1327 audit(1719325104.599:289): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:24.624712 systemd-logind[1895]: New session 9 of user core. Jun 25 14:18:24.628194 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 25 14:18:24.638732 containerd[1911]: time="2024-06-25T14:18:24.638584234Z" level=error msg="encountered an error cleaning up failed sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.641000 audit[4075]: USER_START pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.645000 audit[4147]: CRED_ACQ pid=4147 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.652544 kernel: audit: type=1105 audit(1719325104.641:290): pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.652818 kernel: audit: type=1103 audit(1719325104.645:291): pid=4147 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.653070 containerd[1911]: time="2024-06-25T14:18:24.652939017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcct9,Uid:81a99eed-b323-4056-acb1-e2466297b4af,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.659607 kubelet[3283]: E0625 14:18:24.657418 3283 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.659607 kubelet[3283]: E0625 14:18:24.657532 3283 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rcct9" Jun 25 14:18:24.659607 kubelet[3283]: E0625 14:18:24.657573 3283 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-rcct9" Jun 25 14:18:24.659607 kubelet[3283]: E0625 14:18:24.657683 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-rcct9_kube-system(81a99eed-b323-4056-acb1-e2466297b4af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-rcct9_kube-system(81a99eed-b323-4056-acb1-e2466297b4af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-rcct9" podUID="81a99eed-b323-4056-acb1-e2466297b4af" Jun 25 14:18:24.745849 containerd[1911]: time="2024-06-25T14:18:24.745757058Z" level=error msg="Failed to destroy network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.746560 containerd[1911]: time="2024-06-25T14:18:24.746422099Z" level=error msg="encountered an error cleaning up failed sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.746756 containerd[1911]: time="2024-06-25T14:18:24.746596192Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s85fn,Uid:cc7acd19-00be-407a-b3d7-2b1d30780fb3,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.747150 kubelet[3283]: E0625 14:18:24.747091 3283 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.747284 kubelet[3283]: E0625 14:18:24.747254 3283 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:24.747377 kubelet[3283]: E0625 14:18:24.747295 3283 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s85fn" Jun 25 14:18:24.747455 kubelet[3283]: E0625 14:18:24.747440 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s85fn_calico-system(cc7acd19-00be-407a-b3d7-2b1d30780fb3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s85fn_calico-system(cc7acd19-00be-407a-b3d7-2b1d30780fb3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:24.843181 kubelet[3283]: I0625 14:18:24.843018 3283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:24.846150 containerd[1911]: time="2024-06-25T14:18:24.844337174Z" level=info msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" Jun 25 14:18:24.848577 containerd[1911]: time="2024-06-25T14:18:24.848516812Z" level=info msg="Ensure that sandbox b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1 in task-service has been cleanup successfully" Jun 25 14:18:24.849292 kubelet[3283]: I0625 14:18:24.849239 3283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:24.853426 containerd[1911]: time="2024-06-25T14:18:24.850786490Z" level=info msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" Jun 25 14:18:24.853996 kubelet[3283]: I0625 14:18:24.853890 3283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:24.856678 containerd[1911]: time="2024-06-25T14:18:24.855288562Z" level=info msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" Jun 25 14:18:24.856678 containerd[1911]: time="2024-06-25T14:18:24.855678051Z" level=info msg="Ensure that sandbox 3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3 in task-service has been cleanup successfully" Jun 25 14:18:24.856678 containerd[1911]: time="2024-06-25T14:18:24.856202299Z" level=info msg="Ensure that sandbox d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f in task-service has been cleanup successfully" Jun 25 14:18:24.873760 kubelet[3283]: I0625 14:18:24.873461 3283 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:24.891129 containerd[1911]: 
time="2024-06-25T14:18:24.877333955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Jun 25 14:18:24.891802 containerd[1911]: time="2024-06-25T14:18:24.891737828Z" level=info msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" Jun 25 14:18:24.892118 containerd[1911]: time="2024-06-25T14:18:24.892069275Z" level=info msg="Ensure that sandbox 10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093 in task-service has been cleanup successfully" Jun 25 14:18:24.915728 sshd[4075]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:24.918000 audit[4075]: USER_END pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.923311 systemd-logind[1895]: Session 9 logged out. Waiting for processes to exit. Jun 25 14:18:24.919000 audit[4075]: CRED_DISP pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.931143 kernel: audit: type=1106 audit(1719325104.918:292): pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.936440 kernel: audit: type=1104 audit(1719325104.919:293): pid=4075 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:24.931000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-172.31.29.41:22-139.178.68.195:39962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:24.932010 systemd[1]: sshd@8-172.31.29.41:22-139.178.68.195:39962.service: Deactivated successfully. Jun 25 14:18:24.933949 systemd[1]: session-9.scope: Deactivated successfully. Jun 25 14:18:24.939856 systemd-logind[1895]: Removed session 9. 
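
The RunPodSandbox failures above and the StopPodSandbox failures that follow share one root cause: Calico's CNI plugin reads /var/lib/calico/nodename to learn which Calico node it is acting for, and that file is only written once the calico/node container starts on the host; its image pull has only just been requested (the PullImage line above). Until it is running, every pod here that needs pod networking (coredns-5dd5756b68-rcct9, coredns-5dd5756b68-47ngv, calico-kube-controllers-84fbd4855c-kghkg, csi-node-driver-s85fn) fails sandbox setup with the same stat error, and kubelet retries with backoff. As a rough triage aid, a sketch like the one below (a hypothetical helper, not part of Flatcar, Calico, or kubelet) counts those failures per pod from a saved journal dump, e.g. the output of journalctl -o short:

    #!/usr/bin/env python3
    # Hypothetical triage helper (illustration only): count pod-sync failures
    # caused by the missing /var/lib/calico/nodename file, grouped by pod.
    import re
    import sys
    from collections import Counter

    NODENAME_ERR = "stat /var/lib/calico/nodename: no such file or directory"
    POD_RE = re.compile(r'pod="([^"]+)"')

    def count_failures(lines):
        hits = Counter()
        for line in lines:
            if NODENAME_ERR in line:
                match = POD_RE.search(line)
                hits[match.group(1) if match else "<no pod field>"] += 1
        return hits

    if __name__ == "__main__":
        # Usage: nodename_failures.py node.log   (node.log from `journalctl -o short`)
        with open(sys.argv[1], encoding="utf-8", errors="replace") as fh:
            for pod, count in count_failures(fh).most_common():
                print(f"{count:4d}  {pod}")

Counts that keep growing after calico-node is reported Running would point at a real misconfiguration; the burst captured here is the expected transient kind.
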
Jun 25 14:18:24.996919 containerd[1911]: time="2024-06-25T14:18:24.996824415Z" level=error msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" failed" error="failed to destroy network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:24.997473 kubelet[3283]: E0625 14:18:24.997413 3283 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:24.997607 kubelet[3283]: E0625 14:18:24.997522 3283 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f"} Jun 25 14:18:24.997607 kubelet[3283]: E0625 14:18:24.997599 3283 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"465b00ac-d2f9-4d4f-8724-a625ed37de19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:18:24.997897 kubelet[3283]: E0625 14:18:24.997673 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"465b00ac-d2f9-4d4f-8724-a625ed37de19\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-47ngv" podUID="465b00ac-d2f9-4d4f-8724-a625ed37de19" Jun 25 14:18:25.035229 containerd[1911]: time="2024-06-25T14:18:25.035146917Z" level=error msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" failed" error="failed to destroy network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:25.035864 kubelet[3283]: E0625 14:18:25.035791 3283 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:25.035864 kubelet[3283]: E0625 14:18:25.035866 3283 
kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093"} Jun 25 14:18:25.036090 kubelet[3283]: E0625 14:18:25.035929 3283 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:18:25.036090 kubelet[3283]: E0625 14:18:25.035986 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" podUID="4d76b6a7-2dd0-4867-abc7-c8bd529a7e66" Jun 25 14:18:25.052161 containerd[1911]: time="2024-06-25T14:18:25.052073985Z" level=error msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" failed" error="failed to destroy network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:25.053011 kubelet[3283]: E0625 14:18:25.052688 3283 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:25.053011 kubelet[3283]: E0625 14:18:25.052770 3283 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3"} Jun 25 14:18:25.053011 kubelet[3283]: E0625 14:18:25.052879 3283 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:18:25.053011 kubelet[3283]: E0625 14:18:25.052976 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc7acd19-00be-407a-b3d7-2b1d30780fb3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s85fn" podUID="cc7acd19-00be-407a-b3d7-2b1d30780fb3" Jun 25 14:18:25.064643 containerd[1911]: time="2024-06-25T14:18:25.064547698Z" level=error msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" failed" error="failed to destroy network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jun 25 14:18:25.065000 kubelet[3283]: E0625 14:18:25.064947 3283 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:25.065100 kubelet[3283]: E0625 14:18:25.065018 3283 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1"} Jun 25 14:18:25.065100 kubelet[3283]: E0625 14:18:25.065081 3283 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"81a99eed-b323-4056-acb1-e2466297b4af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jun 25 14:18:25.065293 kubelet[3283]: E0625 14:18:25.065136 3283 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"81a99eed-b323-4056-acb1-e2466297b4af\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-rcct9" podUID="81a99eed-b323-4056-acb1-e2466297b4af" Jun 25 14:18:25.509580 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3-shm.mount: Deactivated successfully. Jun 25 14:18:25.509910 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1-shm.mount: Deactivated successfully. Jun 25 14:18:29.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.41:22-139.178.68.195:41542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:29.946505 systemd[1]: Started sshd@9-172.31.29.41:22-139.178.68.195:41542.service - OpenSSH per-connection server daemon (139.178.68.195:41542). Jun 25 14:18:29.953277 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:18:29.953409 kernel: audit: type=1130 audit(1719325109.946:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.41:22-139.178.68.195:41542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:30.148000 audit[4270]: USER_ACCT pid=4270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.152542 sshd[4270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:30.148000 audit[4270]: CRED_ACQ pid=4270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.154577 sshd[4270]: Accepted publickey for core from 139.178.68.195 port 41542 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:30.158529 kernel: audit: type=1101 audit(1719325110.148:296): pid=4270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.158864 kernel: audit: type=1103 audit(1719325110.148:297): pid=4270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.162602 kernel: audit: type=1006 audit(1719325110.148:298): pid=4270 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jun 25 14:18:30.148000 audit[4270]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc6c8b60 a2=3 a3=1 items=0 ppid=1 pid=4270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:30.167872 kernel: audit: type=1300 audit(1719325110.148:298): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc6c8b60 a2=3 a3=1 items=0 ppid=1 pid=4270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:30.148000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:30.172928 kernel: audit: type=1327 audit(1719325110.148:298): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:30.182089 systemd-logind[1895]: New session 10 of user core. Jun 25 14:18:30.189223 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jun 25 14:18:30.204000 audit[4270]: USER_START pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.213708 kernel: audit: type=1105 audit(1719325110.204:299): pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.213000 audit[4273]: CRED_ACQ pid=4273 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.220749 kernel: audit: type=1103 audit(1719325110.213:300): pid=4273 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.489459 sshd[4270]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:30.493000 audit[4270]: USER_END pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.493000 audit[4270]: CRED_DISP pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.496752 systemd[1]: sshd@9-172.31.29.41:22-139.178.68.195:41542.service: Deactivated successfully. Jun 25 14:18:30.498409 systemd[1]: session-10.scope: Deactivated successfully. Jun 25 14:18:30.503330 kernel: audit: type=1106 audit(1719325110.493:301): pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.503594 kernel: audit: type=1104 audit(1719325110.493:302): pid=4270 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.502576 systemd-logind[1895]: Session 10 logged out. Waiting for processes to exit. Jun 25 14:18:30.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-172.31.29.41:22-139.178.68.195:41542 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:30.505362 systemd-logind[1895]: Removed session 10. Jun 25 14:18:30.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.41:22-139.178.68.195:41552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:30.520441 systemd[1]: Started sshd@10-172.31.29.41:22-139.178.68.195:41552.service - OpenSSH per-connection server daemon (139.178.68.195:41552). Jun 25 14:18:30.709000 audit[4283]: USER_ACCT pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.710877 sshd[4283]: Accepted publickey for core from 139.178.68.195 port 41552 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:30.712000 audit[4283]: CRED_ACQ pid=4283 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.712000 audit[4283]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffce76dd90 a2=3 a3=1 items=0 ppid=1 pid=4283 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:30.712000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:30.714687 sshd[4283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:30.725274 systemd-logind[1895]: New session 11 of user core. Jun 25 14:18:30.729203 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 25 14:18:30.742000 audit[4283]: USER_START pid=4283 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:30.746000 audit[4286]: CRED_ACQ pid=4286 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.534158 sshd[4283]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:31.543000 audit[4283]: USER_END pid=4283 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.543000 audit[4283]: CRED_DISP pid=4283 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-172.31.29.41:22-139.178.68.195:41552 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:31.547184 systemd[1]: sshd@10-172.31.29.41:22-139.178.68.195:41552.service: Deactivated successfully. Jun 25 14:18:31.549027 systemd[1]: session-11.scope: Deactivated successfully. Jun 25 14:18:31.553017 systemd-logind[1895]: Session 11 logged out. Waiting for processes to exit. 
Jun 25 14:18:31.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.41:22-139.178.68.195:41558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:31.573809 systemd[1]: Started sshd@11-172.31.29.41:22-139.178.68.195:41558.service - OpenSSH per-connection server daemon (139.178.68.195:41558). Jun 25 14:18:31.588012 systemd-logind[1895]: Removed session 11. Jun 25 14:18:31.786000 audit[4294]: USER_ACCT pid=4294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.787947 sshd[4294]: Accepted publickey for core from 139.178.68.195 port 41558 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:31.790000 audit[4294]: CRED_ACQ pid=4294 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.791000 audit[4294]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5bf66d0 a2=3 a3=1 items=0 ppid=1 pid=4294 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:31.791000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:31.794458 sshd[4294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:31.801478 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2251664646.mount: Deactivated successfully. Jun 25 14:18:31.818481 systemd-logind[1895]: New session 12 of user core. Jun 25 14:18:31.824161 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 25 14:18:31.841000 audit[4294]: USER_START pid=4294 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:31.845000 audit[4297]: CRED_ACQ pid=4297 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:32.056540 containerd[1911]: time="2024-06-25T14:18:32.056356638Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:32.076349 containerd[1911]: time="2024-06-25T14:18:32.076269673Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=110491350" Jun 25 14:18:32.099763 containerd[1911]: time="2024-06-25T14:18:32.099693274Z" level=info msg="ImageCreate event name:\"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:32.118307 sshd[4294]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:32.120000 audit[4294]: USER_END pid=4294 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:32.120000 audit[4294]: CRED_DISP pid=4294 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:32.124459 systemd[1]: sshd@11-172.31.29.41:22-139.178.68.195:41558.service: Deactivated successfully. Jun 25 14:18:32.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-172.31.29.41:22-139.178.68.195:41558 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:32.127065 systemd[1]: session-12.scope: Deactivated successfully. Jun 25 14:18:32.127068 systemd-logind[1895]: Session 12 logged out. Waiting for processes to exit. Jun 25 14:18:32.129894 systemd-logind[1895]: Removed session 12. 
Jun 25 14:18:32.150848 containerd[1911]: time="2024-06-25T14:18:32.150773459Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:32.192556 containerd[1911]: time="2024-06-25T14:18:32.192472380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:32.195076 containerd[1911]: time="2024-06-25T14:18:32.195002536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"110491212\" in 7.317546206s" Jun 25 14:18:32.195337 containerd[1911]: time="2024-06-25T14:18:32.195281364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:d80cbd636ae2754a08d04558f0436508a17d92258e4712cc4a6299f43497607f\"" Jun 25 14:18:32.227777 containerd[1911]: time="2024-06-25T14:18:32.227692910Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jun 25 14:18:32.258110 containerd[1911]: time="2024-06-25T14:18:32.257821497Z" level=info msg="CreateContainer within sandbox \"47162c4697c227b87a6229e614223922badc2f6d10acb8db3079002c53dd8b25\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96\"" Jun 25 14:18:32.264704 containerd[1911]: time="2024-06-25T14:18:32.261578036Z" level=info msg="StartContainer for \"c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96\"" Jun 25 14:18:32.262327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2164772363.mount: Deactivated successfully. Jun 25 14:18:32.374874 containerd[1911]: time="2024-06-25T14:18:32.374714627Z" level=info msg="StartContainer for \"c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96\" returns successfully" Jun 25 14:18:32.522276 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jun 25 14:18:32.522452 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jun 25 14:18:32.955287 kubelet[3283]: I0625 14:18:32.955235 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-4cqxl" podStartSLOduration=2.025086556 podCreationTimestamp="2024-06-25 14:18:12 +0000 UTC" firstStartedPulling="2024-06-25 14:18:13.277314938 +0000 UTC m=+37.048402378" lastFinishedPulling="2024-06-25 14:18:32.195823793 +0000 UTC m=+55.966911233" observedRunningTime="2024-06-25 14:18:32.933371159 +0000 UTC m=+56.704458683" watchObservedRunningTime="2024-06-25 14:18:32.943595411 +0000 UTC m=+56.714682875" Jun 25 14:18:32.963499 systemd[1]: run-containerd-runc-k8s.io-c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96-runc.zCph3I.mount: Deactivated successfully. Jun 25 14:18:33.943594 systemd[1]: run-containerd-runc-k8s.io-c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96-runc.N7fRfB.mount: Deactivated successfully. 
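
The audit records that follow come from the freshly started calico-node container: the AVC entries are its runit log wrappers tee-ing service output into /etc/service/enabled/*/log through /dev/fd/63, systemd-networkd then reports the vxlan.calico overlay interface (created by calico-node for VXLAN encapsulation) gaining carrier, and the bpftool records load Calico's XDP prefilter object /usr/lib/calico/bpf/filter.o, pinning it under /sys/fs/bpf/calico/xdp. The PROCTITLE field in each audit record is just the command line, hex-encoded with NUL separators between arguments; a minimal decoder (an illustrative one-off, doing by hand what ausearch -i prints):

    #!/usr/bin/env python3
    # Illustrative one-off (not an auditd tool): decode the hex-encoded,
    # NUL-separated PROCTITLE value of an audit record into a command line.
    import sys

    def decode_proctitle(hex_value: str) -> str:
        return bytes.fromhex(hex_value).replace(b"\x00", b" ").decode("utf-8", "replace")

    if __name__ == "__main__":
        # Example: passing the bpftool PROCTITLE value from the records below
        # prints "bpftool prog load /usr/lib/calico/bpf/filter.o
        # /sys/fs/bpf/calico/xdp/prefilter_v1_calico_tmp_A type xdp"
        for value in sys.argv[1:]:
            print(decode_proctitle(value))
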
Jun 25 14:18:34.261000 audit[4450]: AVC avc: denied { write } for pid=4450 comm="tee" name="fd" dev="proc" ino=23691 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.261000 audit[4450]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff4bf9a22 a2=241 a3=1b6 items=1 ppid=4425 pid=4450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.261000 audit: CWD cwd="/etc/service/enabled/felix/log" Jun 25 14:18:34.261000 audit: PATH item=0 name="/dev/fd/63" inode=23659 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.261000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.275000 audit[4468]: AVC avc: denied { write } for pid=4468 comm="tee" name="fd" dev="proc" ino=23700 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.275000 audit[4468]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc829aa24 a2=241 a3=1b6 items=1 ppid=4437 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.275000 audit: CWD cwd="/etc/service/enabled/cni/log" Jun 25 14:18:34.275000 audit: PATH item=0 name="/dev/fd/63" inode=23680 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.275000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.318000 audit[4474]: AVC avc: denied { write } for pid=4474 comm="tee" name="fd" dev="proc" ino=23713 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.318000 audit[4474]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffc129a23 a2=241 a3=1b6 items=1 ppid=4426 pid=4474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.318000 audit: CWD cwd="/etc/service/enabled/bird/log" Jun 25 14:18:34.318000 audit: PATH item=0 name="/dev/fd/63" inode=23697 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.318000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.321000 audit[4481]: AVC avc: denied { write } for pid=4481 comm="tee" name="fd" dev="proc" ino=23470 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.321000 audit[4481]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcadafa13 a2=241 a3=1b6 items=1 ppid=4431 pid=4481 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.321000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Jun 25 14:18:34.321000 audit: PATH item=0 name="/dev/fd/63" inode=23462 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.330000 audit[4484]: AVC avc: denied { write } for pid=4484 comm="tee" name="fd" dev="proc" ino=23474 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.330000 audit[4484]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdef21a12 a2=241 a3=1b6 items=1 ppid=4435 pid=4484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.330000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Jun 25 14:18:34.330000 audit: PATH item=0 name="/dev/fd/63" inode=23465 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.330000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.341000 audit[4494]: AVC avc: denied { write } for pid=4494 comm="tee" name="fd" dev="proc" ino=23478 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.341000 audit[4494]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdd68da22 a2=241 a3=1b6 items=1 ppid=4438 pid=4494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.341000 audit: CWD cwd="/etc/service/enabled/bird6/log" Jun 25 14:18:34.341000 audit: PATH item=0 name="/dev/fd/63" inode=23715 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.341000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.352000 audit[4488]: AVC avc: denied { write } for pid=4488 comm="tee" name="fd" dev="proc" ino=23720 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Jun 25 14:18:34.352000 audit[4488]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffd3fbea22 a2=241 a3=1b6 items=1 ppid=4451 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:34.352000 audit: CWD cwd="/etc/service/enabled/confd/log" Jun 25 14:18:34.352000 audit: PATH item=0 name="/dev/fd/63" inode=23712 dev=00:0c mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jun 25 14:18:34.352000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Jun 25 14:18:34.959239 systemd-networkd[1599]: vxlan.calico: Link UP Jun 25 14:18:34.959259 systemd-networkd[1599]: vxlan.calico: Gained carrier Jun 25 14:18:34.960573 (udev-worker)[4351]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:18:35.017235 (udev-worker)[4350]: Network interface NamePolicy= disabled on kernel command line. Jun 25 14:18:35.027000 audit: BPF prog-id=10 op=LOAD Jun 25 14:18:35.035137 kernel: kauditd_printk_skb: 58 callbacks suppressed Jun 25 14:18:35.035266 kernel: audit: type=1334 audit(1719325115.027:329): prog-id=10 op=LOAD Jun 25 14:18:35.027000 audit[4564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffed1316f8 a2=70 a3=ffffed131768 items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.027000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.049226 kernel: audit: type=1300 audit(1719325115.027:329): arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffed1316f8 a2=70 a3=ffffed131768 items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.049352 kernel: audit: type=1327 audit(1719325115.027:329): proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.035000 audit: BPF prog-id=10 op=UNLOAD Jun 25 14:18:35.035000 audit: BPF prog-id=11 op=LOAD Jun 25 14:18:35.057653 kernel: audit: type=1334 audit(1719325115.035:330): prog-id=10 op=UNLOAD Jun 25 14:18:35.035000 audit[4564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffed1316f8 a2=70 a3=4b243c items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.064755 kernel: audit: type=1334 audit(1719325115.035:331): prog-id=11 op=LOAD Jun 25 14:18:35.064900 kernel: audit: type=1300 audit(1719325115.035:331): arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffed1316f8 a2=70 a3=4b243c items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.035000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.069048 kernel: audit: type=1327 audit(1719325115.035:331): 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.041000 audit: BPF prog-id=11 op=UNLOAD Jun 25 14:18:35.070761 kernel: audit: type=1334 audit(1719325115.041:332): prog-id=11 op=UNLOAD Jun 25 14:18:35.070872 kernel: audit: type=1334 audit(1719325115.041:333): prog-id=12 op=LOAD Jun 25 14:18:35.041000 audit: BPF prog-id=12 op=LOAD Jun 25 14:18:35.041000 audit[4564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffed131698 a2=70 a3=ffffed131708 items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.077415 kernel: audit: type=1300 audit(1719325115.041:333): arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffed131698 a2=70 a3=ffffed131708 items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.041000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.041000 audit: BPF prog-id=12 op=UNLOAD Jun 25 14:18:35.042000 audit: BPF prog-id=13 op=LOAD Jun 25 14:18:35.042000 audit[4564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffed1316c8 a2=70 a3=149da449 items=0 ppid=4427 pid=4564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.042000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jun 25 14:18:35.079000 audit: BPF prog-id=13 op=UNLOAD Jun 25 14:18:35.079000 audit[4572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32 a0=3 a1=ffffdc3384e0 a2=0 a3=ffff7fd8efa8 items=0 ppid=4427 pid=4572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip" exe="/usr/sbin/ip" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.079000 audit: PROCTITLE proctitle=6970006C696E6B0064656C0063616C69636F5F746D705F41 Jun 25 14:18:35.193000 audit[4591]: NETFILTER_CFG table=nat:97 family=2 entries=15 op=nft_register_chain pid=4591 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:35.193000 audit[4591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd75bad20 a2=0 a3=ffffbe884fa8 items=0 ppid=4427 pid=4591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.193000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:35.194000 audit[4593]: NETFILTER_CFG table=mangle:98 family=2 entries=16 op=nft_register_chain pid=4593 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:35.194000 audit[4593]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=fffff6f89b30 a2=0 a3=ffffb20befa8 items=0 ppid=4427 pid=4593 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.194000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:35.199000 audit[4592]: NETFILTER_CFG table=raw:99 family=2 entries=19 op=nft_register_chain pid=4592 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:35.199000 audit[4592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6992 a0=3 a1=ffffe7b21d40 a2=0 a3=ffff89668fa8 items=0 ppid=4427 pid=4592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.199000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:35.203000 audit[4596]: NETFILTER_CFG table=filter:100 family=2 entries=39 op=nft_register_chain pid=4596 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:35.203000 audit[4596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffc9680420 a2=0 a3=ffffa661dfa8 items=0 ppid=4427 pid=4596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:35.203000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:35.544089 containerd[1911]: time="2024-06-25T14:18:35.542691865Z" level=info msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.667 [INFO][4618] k8s.go 608: Cleaning up netns ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.668 [INFO][4618] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" iface="eth0" netns="/var/run/netns/cni-cec98887-dc7a-e494-fb76-7b2bfea933a3" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.668 [INFO][4618] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" iface="eth0" netns="/var/run/netns/cni-cec98887-dc7a-e494-fb76-7b2bfea933a3" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.668 [INFO][4618] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" iface="eth0" netns="/var/run/netns/cni-cec98887-dc7a-e494-fb76-7b2bfea933a3" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.668 [INFO][4618] k8s.go 615: Releasing IP address(es) ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.669 [INFO][4618] utils.go 188: Calico CNI releasing IP address ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.712 [INFO][4626] ipam_plugin.go 411: Releasing address using handleID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.712 [INFO][4626] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.712 [INFO][4626] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.725 [WARNING][4626] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.725 [INFO][4626] ipam_plugin.go 439: Releasing address using workloadID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.727 [INFO][4626] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:35.732744 containerd[1911]: 2024-06-25 14:18:35.730 [INFO][4618] k8s.go 621: Teardown processing complete. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:35.740813 containerd[1911]: time="2024-06-25T14:18:35.737846870Z" level=info msg="TearDown network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" successfully" Jun 25 14:18:35.740813 containerd[1911]: time="2024-06-25T14:18:35.737905946Z" level=info msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" returns successfully" Jun 25 14:18:35.740813 containerd[1911]: time="2024-06-25T14:18:35.739854915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbd4855c-kghkg,Uid:4d76b6a7-2dd0-4867-abc7-c8bd529a7e66,Namespace:calico-system,Attempt:1,}" Jun 25 14:18:35.737939 systemd[1]: run-netns-cni\x2dcec98887\x2ddc7a\x2de494\x2dfb76\x2d7b2bfea933a3.mount: Deactivated successfully. Jun 25 14:18:35.991112 (udev-worker)[4567]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 14:18:35.993271 systemd-networkd[1599]: cali4ed90d58d1d: Link UP Jun 25 14:18:35.997247 systemd-networkd[1599]: cali4ed90d58d1d: Gained carrier Jun 25 14:18:35.997737 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali4ed90d58d1d: link becomes ready Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.834 [INFO][4632] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0 calico-kube-controllers-84fbd4855c- calico-system 4d76b6a7-2dd0-4867-abc7-c8bd529a7e66 810 0 2024-06-25 14:18:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:84fbd4855c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-29-41 calico-kube-controllers-84fbd4855c-kghkg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali4ed90d58d1d [] []}} ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.835 [INFO][4632] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.897 [INFO][4643] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" HandleID="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.917 [INFO][4643] ipam_plugin.go 264: Auto assigning IP ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" HandleID="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40000ce220), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-41", "pod":"calico-kube-controllers-84fbd4855c-kghkg", "timestamp":"2024-06-25 14:18:35.897445943 +0000 UTC"}, Hostname:"ip-172-31-29-41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.918 [INFO][4643] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.918 [INFO][4643] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
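The audit(1719325115.xxx:NNN) stamps in the kernel audit lines further above carry the raw epoch time of each event; converting the epoch back to UTC reproduces the journal's wall-clock prefix, which is a quick way to line kauditd output up with the rest of the log:

    from datetime import datetime, timezone

    # Epoch taken from "audit(1719325115.027:329)" above.
    print(datetime.fromtimestamp(1719325115.027, tz=timezone.utc))
    # 2024-06-25 14:18:35.027000+00:00 — matches the "Jun 25 14:18:35.027" prefix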
Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.918 [INFO][4643] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-41' Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.921 [INFO][4643] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.928 [INFO][4643] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.937 [INFO][4643] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.940 [INFO][4643] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.947 [INFO][4643] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.947 [INFO][4643] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.950 [INFO][4643] ipam.go 1685: Creating new handle: k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.957 [INFO][4643] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.973 [INFO][4643] ipam.go 1216: Successfully claimed IPs: [192.168.115.129/26] block=192.168.115.128/26 handle="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.975 [INFO][4643] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.129/26] handle="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" host="ip-172-31-29-41" Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.976 [INFO][4643] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:18:36.023587 containerd[1911]: 2024-06-25 14:18:35.976 [INFO][4643] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.129/26] IPv6=[] ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" HandleID="k8s-pod-network.b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:35.982 [INFO][4632] k8s.go 386: Populated endpoint ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0", GenerateName:"calico-kube-controllers-84fbd4855c-", Namespace:"calico-system", SelfLink:"", UID:"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbd4855c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"", Pod:"calico-kube-controllers-84fbd4855c-kghkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ed90d58d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:35.983 [INFO][4632] k8s.go 387: Calico CNI using IPs: [192.168.115.129/32] ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:35.983 [INFO][4632] dataplane_linux.go 68: Setting the host side veth name to cali4ed90d58d1d ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:35.999 [INFO][4632] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:36.000 [INFO][4632] k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0", GenerateName:"calico-kube-controllers-84fbd4855c-", Namespace:"calico-system", SelfLink:"", UID:"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66", ResourceVersion:"810", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbd4855c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae", Pod:"calico-kube-controllers-84fbd4855c-kghkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ed90d58d1d", MAC:"12:f8:c7:1d:10:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:36.025387 containerd[1911]: 2024-06-25 14:18:36.016 [INFO][4632] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae" Namespace="calico-system" Pod="calico-kube-controllers-84fbd4855c-kghkg" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.050000 audit[4658]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:36.050000 audit[4658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=fffff2524900 a2=0 a3=ffff88ea0fa8 items=0 ppid=4427 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:36.050000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:36.104462 containerd[1911]: time="2024-06-25T14:18:36.101264789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:36.104462 containerd[1911]: time="2024-06-25T14:18:36.101371984Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:36.104462 containerd[1911]: time="2024-06-25T14:18:36.101434707Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:36.104462 containerd[1911]: time="2024-06-25T14:18:36.101471679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:36.285077 containerd[1911]: time="2024-06-25T14:18:36.284859092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-84fbd4855c-kghkg,Uid:4d76b6a7-2dd0-4867-abc7-c8bd529a7e66,Namespace:calico-system,Attempt:1,} returns sandbox id \"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae\"" Jun 25 14:18:36.288553 containerd[1911]: time="2024-06-25T14:18:36.288493539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Jun 25 14:18:36.600825 containerd[1911]: time="2024-06-25T14:18:36.600646338Z" level=info msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" Jun 25 14:18:36.741472 systemd[1]: run-containerd-runc-k8s.io-b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae-runc.j37X3c.mount: Deactivated successfully. Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.678 [WARNING][4727] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0", GenerateName:"calico-kube-controllers-84fbd4855c-", Namespace:"calico-system", SelfLink:"", UID:"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbd4855c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae", Pod:"calico-kube-controllers-84fbd4855c-kghkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ed90d58d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.678 [INFO][4727] k8s.go 608: Cleaning up netns ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.678 [INFO][4727] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" iface="eth0" netns="" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.678 [INFO][4727] k8s.go 615: Releasing IP address(es) ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.679 [INFO][4727] utils.go 188: Calico CNI releasing IP address ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.730 [INFO][4733] ipam_plugin.go 411: Releasing address using handleID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.730 [INFO][4733] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.730 [INFO][4733] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.748 [WARNING][4733] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.748 [INFO][4733] ipam_plugin.go 439: Releasing address using workloadID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.751 [INFO][4733] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:36.757995 containerd[1911]: 2024-06-25 14:18:36.754 [INFO][4727] k8s.go 621: Teardown processing complete. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.759340 containerd[1911]: time="2024-06-25T14:18:36.758037881Z" level=info msg="TearDown network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" successfully" Jun 25 14:18:36.759340 containerd[1911]: time="2024-06-25T14:18:36.758090680Z" level=info msg="StopPodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" returns successfully" Jun 25 14:18:36.759773 containerd[1911]: time="2024-06-25T14:18:36.759483265Z" level=info msg="RemovePodSandbox for \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" Jun 25 14:18:36.759934 containerd[1911]: time="2024-06-25T14:18:36.759789285Z" level=info msg="Forcibly stopping sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\"" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.838 [WARNING][4752] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0", GenerateName:"calico-kube-controllers-84fbd4855c-", Namespace:"calico-system", SelfLink:"", UID:"4d76b6a7-2dd0-4867-abc7-c8bd529a7e66", ResourceVersion:"816", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"84fbd4855c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae", Pod:"calico-kube-controllers-84fbd4855c-kghkg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.115.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4ed90d58d1d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.838 [INFO][4752] k8s.go 608: Cleaning up netns ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.838 [INFO][4752] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" iface="eth0" netns="" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.839 [INFO][4752] k8s.go 615: Releasing IP address(es) ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.839 [INFO][4752] utils.go 188: Calico CNI releasing IP address ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.878 [INFO][4758] ipam_plugin.go 411: Releasing address using handleID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.880 [INFO][4758] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.880 [INFO][4758] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.893 [WARNING][4758] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.893 [INFO][4758] ipam_plugin.go 439: Releasing address using workloadID ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" HandleID="k8s-pod-network.10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Workload="ip--172--31--29--41-k8s-calico--kube--controllers--84fbd4855c--kghkg-eth0" Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.895 [INFO][4758] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:36.900515 containerd[1911]: 2024-06-25 14:18:36.898 [INFO][4752] k8s.go 621: Teardown processing complete. ContainerID="10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093" Jun 25 14:18:36.901445 containerd[1911]: time="2024-06-25T14:18:36.900480857Z" level=info msg="TearDown network for sandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" successfully" Jun 25 14:18:36.906928 containerd[1911]: time="2024-06-25T14:18:36.906824501Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:18:36.907275 containerd[1911]: time="2024-06-25T14:18:36.907232928Z" level=info msg="RemovePodSandbox \"10a60fc19026cb19fa938781eb58041725b4b1a85c41f634915587b1a97fd093\" returns successfully" Jun 25 14:18:36.998969 systemd-networkd[1599]: vxlan.calico: Gained IPv6LL Jun 25 14:18:37.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.41:22-139.178.68.195:41562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:37.147343 systemd[1]: Started sshd@12-172.31.29.41:22-139.178.68.195:41562.service - OpenSSH per-connection server daemon (139.178.68.195:41562). Jun 25 14:18:37.326000 audit[4765]: USER_ACCT pid=4765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.329526 sshd[4765]: Accepted publickey for core from 139.178.68.195 port 41562 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:37.328000 audit[4765]: CRED_ACQ pid=4765 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.328000 audit[4765]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd328d1b0 a2=3 a3=1 items=0 ppid=1 pid=4765 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:37.328000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:37.331710 sshd[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:37.342439 systemd-logind[1895]: New session 13 of user core. 
Jun 25 14:18:37.347234 systemd[1]: Started session-13.scope - Session 13 of User core. Jun 25 14:18:37.356000 audit[4765]: USER_START pid=4765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.359000 audit[4768]: CRED_ACQ pid=4768 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.454044 systemd-networkd[1599]: cali4ed90d58d1d: Gained IPv6LL Jun 25 14:18:37.546685 containerd[1911]: time="2024-06-25T14:18:37.545022918Z" level=info msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" Jun 25 14:18:37.693954 sshd[4765]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:37.698000 audit[4765]: USER_END pid=4765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.699000 audit[4765]: CRED_DISP pid=4765 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:37.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-172.31.29.41:22-139.178.68.195:41562 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:37.706287 systemd[1]: sshd@12-172.31.29.41:22-139.178.68.195:41562.service: Deactivated successfully. Jun 25 14:18:37.711079 systemd[1]: session-13.scope: Deactivated successfully. Jun 25 14:18:37.720249 systemd-logind[1895]: Session 13 logged out. Waiting for processes to exit. Jun 25 14:18:37.722453 systemd-logind[1895]: Removed session 13. Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.752 [INFO][4790] k8s.go 608: Cleaning up netns ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.753 [INFO][4790] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" iface="eth0" netns="/var/run/netns/cni-a9c0bc1e-3652-5f87-626b-caf5ca15eb5c" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.755 [INFO][4790] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" iface="eth0" netns="/var/run/netns/cni-a9c0bc1e-3652-5f87-626b-caf5ca15eb5c" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.755 [INFO][4790] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" iface="eth0" netns="/var/run/netns/cni-a9c0bc1e-3652-5f87-626b-caf5ca15eb5c" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.755 [INFO][4790] k8s.go 615: Releasing IP address(es) ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.756 [INFO][4790] utils.go 188: Calico CNI releasing IP address ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.848 [INFO][4800] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.849 [INFO][4800] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.849 [INFO][4800] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.880 [WARNING][4800] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.880 [INFO][4800] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.887 [INFO][4800] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:37.907134 containerd[1911]: 2024-06-25 14:18:37.902 [INFO][4790] k8s.go 621: Teardown processing complete. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:18:37.916916 containerd[1911]: time="2024-06-25T14:18:37.915455609Z" level=info msg="TearDown network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" successfully" Jun 25 14:18:37.916916 containerd[1911]: time="2024-06-25T14:18:37.915520001Z" level=info msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" returns successfully" Jun 25 14:18:37.916916 containerd[1911]: time="2024-06-25T14:18:37.916873226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s85fn,Uid:cc7acd19-00be-407a-b3d7-2b1d30780fb3,Namespace:calico-system,Attempt:1,}" Jun 25 14:18:37.912106 systemd[1]: run-netns-cni\x2da9c0bc1e\x2d3652\x2d5f87\x2d626b\x2dcaf5ca15eb5c.mount: Deactivated successfully. 
Jun 25 14:18:38.277409 systemd-networkd[1599]: cali392401c060f: Link UP Jun 25 14:18:38.284394 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:18:38.284563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali392401c060f: link becomes ready Jun 25 14:18:38.285045 systemd-networkd[1599]: cali392401c060f: Gained carrier Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.105 [INFO][4811] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0 csi-node-driver- calico-system cc7acd19-00be-407a-b3d7-2b1d30780fb3 831 0 2024-06-25 14:18:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ip-172-31-29-41 csi-node-driver-s85fn eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali392401c060f [] []}} ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.106 [INFO][4811] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.167 [INFO][4822] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" HandleID="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.198 [INFO][4822] ipam_plugin.go 264: Auto assigning IP ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" HandleID="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345270), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-29-41", "pod":"csi-node-driver-s85fn", "timestamp":"2024-06-25 14:18:38.167501385 +0000 UTC"}, Hostname:"ip-172-31-29-41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.198 [INFO][4822] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.198 [INFO][4822] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.198 [INFO][4822] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-41' Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.202 [INFO][4822] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.220 [INFO][4822] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.231 [INFO][4822] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.235 [INFO][4822] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.242 [INFO][4822] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.243 [INFO][4822] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.247 [INFO][4822] ipam.go 1685: Creating new handle: k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2 Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.255 [INFO][4822] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.265 [INFO][4822] ipam.go 1216: Successfully claimed IPs: [192.168.115.130/26] block=192.168.115.128/26 handle="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.266 [INFO][4822] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.130/26] handle="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" host="ip-172-31-29-41" Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.266 [INFO][4822] ipam_plugin.go 373: Released host-wide IPAM lock. 
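Both pods on this node are being assigned out of the same affine /26 IPAM block: 192.168.115.129 went to calico-kube-controllers-84fbd4855c-kghkg earlier and 192.168.115.130 to csi-node-driver-s85fn here. A quick check with Python's standard ipaddress module, just to make the block arithmetic concrete:

    import ipaddress

    block = ipaddress.ip_network("192.168.115.128/26")        # node's affine block, from the log
    print(block.num_addresses)                                 # 64 addresses per /26 block
    print(ipaddress.ip_address("192.168.115.129") in block)    # True (kube-controllers pod)
    print(ipaddress.ip_address("192.168.115.130") in block)    # True (csi-node-driver pod)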
Jun 25 14:18:38.340509 containerd[1911]: 2024-06-25 14:18:38.266 [INFO][4822] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.130/26] IPv6=[] ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" HandleID="k8s-pod-network.485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.269 [INFO][4811] k8s.go 386: Populated endpoint ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc7acd19-00be-407a-b3d7-2b1d30780fb3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"", Pod:"csi-node-driver-s85fn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali392401c060f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.269 [INFO][4811] k8s.go 387: Calico CNI using IPs: [192.168.115.130/32] ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.269 [INFO][4811] dataplane_linux.go 68: Setting the host side veth name to cali392401c060f ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.292 [INFO][4811] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.293 [INFO][4811] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc7acd19-00be-407a-b3d7-2b1d30780fb3", ResourceVersion:"831", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2", Pod:"csi-node-driver-s85fn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali392401c060f", MAC:"82:b1:e6:c0:e1:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:38.342031 containerd[1911]: 2024-06-25 14:18:38.315 [INFO][4811] k8s.go 500: Wrote updated endpoint to datastore ContainerID="485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2" Namespace="calico-system" Pod="csi-node-driver-s85fn" WorkloadEndpoint="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:18:38.349000 audit[4836]: NETFILTER_CFG table=filter:102 family=2 entries=34 op=nft_register_chain pid=4836 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:38.349000 audit[4836]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18640 a0=3 a1=ffffe99f3f90 a2=0 a3=ffff8be80fa8 items=0 ppid=4427 pid=4836 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:38.349000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:38.423985 containerd[1911]: time="2024-06-25T14:18:38.423856239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:38.424176 containerd[1911]: time="2024-06-25T14:18:38.424024034Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:38.424176 containerd[1911]: time="2024-06-25T14:18:38.424093765Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:38.424338 containerd[1911]: time="2024-06-25T14:18:38.424197900Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:38.583042 containerd[1911]: time="2024-06-25T14:18:38.582979000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s85fn,Uid:cc7acd19-00be-407a-b3d7-2b1d30780fb3,Namespace:calico-system,Attempt:1,} returns sandbox id \"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2\"" Jun 25 14:18:39.097505 containerd[1911]: time="2024-06-25T14:18:39.097429467Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:39.099143 containerd[1911]: time="2024-06-25T14:18:39.099076474Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=31361057" Jun 25 14:18:39.099669 containerd[1911]: time="2024-06-25T14:18:39.099548717Z" level=info msg="ImageCreate event name:\"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:39.102674 containerd[1911]: time="2024-06-25T14:18:39.102538499Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:39.105762 containerd[1911]: time="2024-06-25T14:18:39.105699794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:39.107815 containerd[1911]: time="2024-06-25T14:18:39.107744453Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"32727593\" in 2.818569406s" Jun 25 14:18:39.107943 containerd[1911]: time="2024-06-25T14:18:39.107813165Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:89df47edb6965978d3683de1cac38ee5b47d7054332bbea7cc0ef3b3c17da2e1\"" Jun 25 14:18:39.109563 containerd[1911]: time="2024-06-25T14:18:39.109508639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Jun 25 14:18:39.143326 containerd[1911]: time="2024-06-25T14:18:39.143249907Z" level=info msg="CreateContainer within sandbox \"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jun 25 14:18:39.167587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3445549000.mount: Deactivated successfully. 
Jun 25 14:18:39.170557 containerd[1911]: time="2024-06-25T14:18:39.170408030Z" level=info msg="CreateContainer within sandbox \"b9e2a78b593fb53f36e83c308868b2692f4dfb3d381f3dd7d6b4e64806dfb2ae\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c5d2922002f2a6888eb50a331ed3b7e13cd79bb7bd7475d1dcd8adf5ab3a8ae8\"" Jun 25 14:18:39.180672 containerd[1911]: time="2024-06-25T14:18:39.178183231Z" level=info msg="StartContainer for \"c5d2922002f2a6888eb50a331ed3b7e13cd79bb7bd7475d1dcd8adf5ab3a8ae8\"" Jun 25 14:18:39.333082 containerd[1911]: time="2024-06-25T14:18:39.333019222Z" level=info msg="StartContainer for \"c5d2922002f2a6888eb50a331ed3b7e13cd79bb7bd7475d1dcd8adf5ab3a8ae8\" returns successfully" Jun 25 14:18:39.544271 containerd[1911]: time="2024-06-25T14:18:39.544188219Z" level=info msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" Jun 25 14:18:39.547924 containerd[1911]: time="2024-06-25T14:18:39.547858574Z" level=info msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.678 [INFO][4948] k8s.go 608: Cleaning up netns ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.678 [INFO][4948] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" iface="eth0" netns="/var/run/netns/cni-50109f9d-4f20-1531-45d2-3998aeca7512" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.684 [INFO][4948] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" iface="eth0" netns="/var/run/netns/cni-50109f9d-4f20-1531-45d2-3998aeca7512" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.685 [INFO][4948] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" iface="eth0" netns="/var/run/netns/cni-50109f9d-4f20-1531-45d2-3998aeca7512" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.685 [INFO][4948] k8s.go 615: Releasing IP address(es) ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.685 [INFO][4948] utils.go 188: Calico CNI releasing IP address ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.805 [INFO][4963] ipam_plugin.go 411: Releasing address using handleID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.808 [INFO][4963] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.808 [INFO][4963] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.830 [WARNING][4963] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.830 [INFO][4963] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.834 [INFO][4963] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:39.858338 containerd[1911]: 2024-06-25 14:18:39.842 [INFO][4948] k8s.go 621: Teardown processing complete. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:18:39.859606 containerd[1911]: time="2024-06-25T14:18:39.858291338Z" level=info msg="TearDown network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" successfully" Jun 25 14:18:39.859606 containerd[1911]: time="2024-06-25T14:18:39.858390325Z" level=info msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" returns successfully" Jun 25 14:18:39.876398 containerd[1911]: time="2024-06-25T14:18:39.876306130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-47ngv,Uid:465b00ac-d2f9-4d4f-8724-a625ed37de19,Namespace:kube-system,Attempt:1,}" Jun 25 14:18:39.979487 kubelet[3283]: I0625 14:18:39.977396 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-84fbd4855c-kghkg" podStartSLOduration=25.156654968 podCreationTimestamp="2024-06-25 14:18:12 +0000 UTC" firstStartedPulling="2024-06-25 14:18:36.287514398 +0000 UTC m=+60.058601838" lastFinishedPulling="2024-06-25 14:18:39.108196177 +0000 UTC m=+62.879283617" observedRunningTime="2024-06-25 14:18:39.975187773 +0000 UTC m=+63.746275249" watchObservedRunningTime="2024-06-25 14:18:39.977336747 +0000 UTC m=+63.748424199" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.787 [INFO][4957] k8s.go 608: Cleaning up netns ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.787 [INFO][4957] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" iface="eth0" netns="/var/run/netns/cni-5f5d96de-e409-fefb-bd51-ef35705f559c" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.787 [INFO][4957] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" iface="eth0" netns="/var/run/netns/cni-5f5d96de-e409-fefb-bd51-ef35705f559c" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.787 [INFO][4957] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" iface="eth0" netns="/var/run/netns/cni-5f5d96de-e409-fefb-bd51-ef35705f559c" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.788 [INFO][4957] k8s.go 615: Releasing IP address(es) ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.788 [INFO][4957] utils.go 188: Calico CNI releasing IP address ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.911 [INFO][4968] ipam_plugin.go 411: Releasing address using handleID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.912 [INFO][4968] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.912 [INFO][4968] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.939 [WARNING][4968] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.939 [INFO][4968] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.948 [INFO][4968] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:18:39.989461 containerd[1911]: 2024-06-25 14:18:39.974 [INFO][4957] k8s.go 621: Teardown processing complete. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:18:39.995964 containerd[1911]: time="2024-06-25T14:18:39.995878490Z" level=info msg="TearDown network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" successfully" Jun 25 14:18:39.996184 containerd[1911]: time="2024-06-25T14:18:39.996147251Z" level=info msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" returns successfully" Jun 25 14:18:39.997894 containerd[1911]: time="2024-06-25T14:18:39.997826022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcct9,Uid:81a99eed-b323-4056-acb1-e2466297b4af,Namespace:kube-system,Attempt:1,}" Jun 25 14:18:40.127419 systemd[1]: run-netns-cni\x2d5f5d96de\x2de409\x2dfefb\x2dbd51\x2def35705f559c.mount: Deactivated successfully. Jun 25 14:18:40.129772 systemd[1]: run-netns-cni\x2d50109f9d\x2d4f20\x2d1531\x2d45d2\x2d3998aeca7512.mount: Deactivated successfully. 
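
The two teardown flows above release each sandbox's address first by its IPAM handle; when nothing is recorded under that handle ("Asked to release address but it doesn't exist. Ignoring"), release falls back to the workload ID before the host-wide lock is dropped. A rough Go sketch of that fallback is below; the datastore layout and the leftover allocation keyed only by workload ID are hypothetical, used just to show the two-step release.

    package main

    import "fmt"

    type ipamStore struct {
        byHandle   map[string][]string // handle ID -> allocated IPs
        byWorkload map[string][]string // workload ID -> allocated IPs
    }

    func (s *ipamStore) releaseByHandle(handle string) bool {
        ips, ok := s.byHandle[handle]
        if !ok {
            // "Asked to release address but it doesn't exist. Ignoring"
            return false
        }
        delete(s.byHandle, handle)
        fmt.Println("released", ips, "via handle", handle)
        return true
    }

    func (s *ipamStore) releaseByWorkload(workload string) {
        if ips, ok := s.byWorkload[workload]; ok {
            delete(s.byWorkload, workload)
            fmt.Println("released", ips, "via workload", workload)
        }
    }

    func main() {
        store := &ipamStore{
            byHandle: map[string][]string{},
            // hypothetical leftover allocation keyed only by the workload ID
            byWorkload: map[string][]string{
                "ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0": {"192.168.115.129"},
            },
        }
        handle := "k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f"
        if !store.releaseByHandle(handle) {
            store.releaseByWorkload("ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0")
        }
    }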
Jun 25 14:18:40.328462 systemd-networkd[1599]: cali392401c060f: Gained IPv6LL Jun 25 14:18:40.613079 systemd-networkd[1599]: cali1cbb04f17a5: Link UP Jun 25 14:18:40.617847 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:18:40.617999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali1cbb04f17a5: link becomes ready Jun 25 14:18:40.621794 systemd-networkd[1599]: cali1cbb04f17a5: Gained carrier Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.230 [INFO][4976] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0 coredns-5dd5756b68- kube-system 465b00ac-d2f9-4d4f-8724-a625ed37de19 849 0 2024-06-25 14:17:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-41 coredns-5dd5756b68-47ngv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1cbb04f17a5 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.231 [INFO][4976] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.467 [INFO][5021] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" HandleID="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.513 [INFO][5021] ipam_plugin.go 264: Auto assigning IP ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" HandleID="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000178710), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-41", "pod":"coredns-5dd5756b68-47ngv", "timestamp":"2024-06-25 14:18:40.467728221 +0000 UTC"}, Hostname:"ip-172-31-29-41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.513 [INFO][5021] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.513 [INFO][5021] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.513 [INFO][5021] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-41' Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.519 [INFO][5021] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.533 [INFO][5021] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.546 [INFO][5021] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.558 [INFO][5021] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.566 [INFO][5021] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.566 [INFO][5021] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.570 [INFO][5021] ipam.go 1685: Creating new handle: k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.578 [INFO][5021] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.593 [INFO][5021] ipam.go 1216: Successfully claimed IPs: [192.168.115.131/26] block=192.168.115.128/26 handle="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.593 [INFO][5021] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.131/26] handle="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" host="ip-172-31-29-41" Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.593 [INFO][5021] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:18:40.674742 containerd[1911]: 2024-06-25 14:18:40.593 [INFO][5021] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.131/26] IPv6=[] ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" HandleID="k8s-pod-network.125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.679344 containerd[1911]: 2024-06-25 14:18:40.598 [INFO][4976] k8s.go 386: Populated endpoint ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"465b00ac-d2f9-4d4f-8724-a625ed37de19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"", Pod:"coredns-5dd5756b68-47ngv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cbb04f17a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:40.679344 containerd[1911]: 2024-06-25 14:18:40.598 [INFO][4976] k8s.go 387: Calico CNI using IPs: [192.168.115.131/32] ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.679344 containerd[1911]: 2024-06-25 14:18:40.598 [INFO][4976] dataplane_linux.go 68: Setting the host side veth name to cali1cbb04f17a5 ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.679344 containerd[1911]: 2024-06-25 14:18:40.618 [INFO][4976] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.679344 containerd[1911]: 
2024-06-25 14:18:40.624 [INFO][4976] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"465b00ac-d2f9-4d4f-8724-a625ed37de19", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c", Pod:"coredns-5dd5756b68-47ngv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cbb04f17a5", MAC:"8a:ff:67:de:f4:d4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:40.679344 containerd[1911]: 2024-06-25 14:18:40.665 [INFO][4976] k8s.go 500: Wrote updated endpoint to datastore ContainerID="125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c" Namespace="kube-system" Pod="coredns-5dd5756b68-47ngv" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:18:40.751944 systemd-networkd[1599]: calibd480aab9c4: Link UP Jun 25 14:18:40.767657 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibd480aab9c4: link becomes ready Jun 25 14:18:40.768113 systemd-networkd[1599]: calibd480aab9c4: Gained carrier Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.358 [INFO][4994] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0 coredns-5dd5756b68- kube-system 81a99eed-b323-4056-acb1-e2466297b4af 850 0 2024-06-25 14:17:50 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-29-41 coredns-5dd5756b68-rcct9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibd480aab9c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" 
Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.358 [INFO][4994] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.504 [INFO][5030] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" HandleID="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.528 [INFO][5030] ipam_plugin.go 264: Auto assigning IP ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" HandleID="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002822d0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-29-41", "pod":"coredns-5dd5756b68-rcct9", "timestamp":"2024-06-25 14:18:40.504443371 +0000 UTC"}, Hostname:"ip-172-31-29-41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.528 [INFO][5030] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.595 [INFO][5030] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.597 [INFO][5030] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-41' Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.601 [INFO][5030] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.610 [INFO][5030] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.648 [INFO][5030] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.657 [INFO][5030] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.668 [INFO][5030] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.668 [INFO][5030] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.680 [INFO][5030] ipam.go 1685: Creating new handle: k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.691 [INFO][5030] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.734 [INFO][5030] ipam.go 1216: Successfully claimed IPs: [192.168.115.132/26] block=192.168.115.128/26 handle="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.734 [INFO][5030] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.132/26] handle="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" host="ip-172-31-29-41" Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.734 [INFO][5030] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:18:40.845151 containerd[1911]: 2024-06-25 14:18:40.734 [INFO][5030] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.132/26] IPv6=[] ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" HandleID="k8s-pod-network.576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.846545 containerd[1911]: 2024-06-25 14:18:40.739 [INFO][4994] k8s.go 386: Populated endpoint ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"81a99eed-b323-4056-acb1-e2466297b4af", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"", Pod:"coredns-5dd5756b68-rcct9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd480aab9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:40.846545 containerd[1911]: 2024-06-25 14:18:40.740 [INFO][4994] k8s.go 387: Calico CNI using IPs: [192.168.115.132/32] ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.846545 containerd[1911]: 2024-06-25 14:18:40.740 [INFO][4994] dataplane_linux.go 68: Setting the host side veth name to calibd480aab9c4 ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.846545 containerd[1911]: 2024-06-25 14:18:40.769 [INFO][4994] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.846545 containerd[1911]: 
2024-06-25 14:18:40.770 [INFO][4994] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"81a99eed-b323-4056-acb1-e2466297b4af", ResourceVersion:"850", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f", Pod:"coredns-5dd5756b68-rcct9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd480aab9c4", MAC:"8e:dc:f4:f3:3e:2b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:40.846545 containerd[1911]: 2024-06-25 14:18:40.840 [INFO][4994] k8s.go 500: Wrote updated endpoint to datastore ContainerID="576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f" Namespace="kube-system" Pod="coredns-5dd5756b68-rcct9" WorkloadEndpoint="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:18:40.945452 containerd[1911]: time="2024-06-25T14:18:40.945140499Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:40.945452 containerd[1911]: time="2024-06-25T14:18:40.945307921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:40.945959 containerd[1911]: time="2024-06-25T14:18:40.945819764Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:40.946338 containerd[1911]: time="2024-06-25T14:18:40.945868376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:40.974753 kernel: kauditd_printk_skb: 37 callbacks suppressed Jun 25 14:18:40.974917 kernel: audit: type=1325 audit(1719325120.962:352): table=filter:103 family=2 entries=42 op=nft_register_chain pid=5069 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:40.962000 audit[5069]: NETFILTER_CFG table=filter:103 family=2 entries=42 op=nft_register_chain pid=5069 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:40.985699 kernel: audit: type=1300 audit(1719325120.962:352): arch=c00000b7 syscall=211 success=yes exit=21524 a0=3 a1=fffff474f970 a2=0 a3=ffffa7155fa8 items=0 ppid=4427 pid=5069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:40.985855 kernel: audit: type=1327 audit(1719325120.962:352): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:40.962000 audit[5069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21524 a0=3 a1=fffff474f970 a2=0 a3=ffffa7155fa8 items=0 ppid=4427 pid=5069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:40.962000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:41.096021 containerd[1911]: time="2024-06-25T14:18:41.095741060Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:41.096021 containerd[1911]: time="2024-06-25T14:18:41.095836579Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:41.096021 containerd[1911]: time="2024-06-25T14:18:41.095879659Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:41.096021 containerd[1911]: time="2024-06-25T14:18:41.095914302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:41.206857 systemd[1]: run-containerd-runc-k8s.io-576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f-runc.HLTf9J.mount: Deactivated successfully. 
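
The audit PROCTITLE field in these NETFILTER_CFG records is the process command line, hex-encoded with NUL bytes separating the arguments. A small Go decoder for the value logged above:

    package main

    import (
        "encoding/hex"
        "fmt"
        "strings"
    )

    func main() {
        const proctitle = "69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030"
        raw, err := hex.DecodeString(proctitle)
        if err != nil {
            panic(err)
        }
        // Arguments are NUL-separated; join with spaces for display.
        fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
        // Output: iptables-nft-restore --noflush --verbose --wait 10 --wait-interval 50000
    }

The same decoding applied to the later iptables-restor records (ppid 3461) yields "iptables-restore -w 5 -W 100000 --noflush --counters".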
Jun 25 14:18:41.219000 audit[5124]: NETFILTER_CFG table=filter:104 family=2 entries=38 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:41.219000 audit[5124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19408 a0=3 a1=ffffcae87420 a2=0 a3=ffff83330fa8 items=0 ppid=4427 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:41.230259 kernel: audit: type=1325 audit(1719325121.219:353): table=filter:104 family=2 entries=38 op=nft_register_chain pid=5124 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:41.230465 kernel: audit: type=1300 audit(1719325121.219:353): arch=c00000b7 syscall=211 success=yes exit=19408 a0=3 a1=ffffcae87420 a2=0 a3=ffff83330fa8 items=0 ppid=4427 pid=5124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:41.246772 kernel: audit: type=1327 audit(1719325121.219:353): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:41.219000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:41.323424 containerd[1911]: time="2024-06-25T14:18:41.323361753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-rcct9,Uid:81a99eed-b323-4056-acb1-e2466297b4af,Namespace:kube-system,Attempt:1,} returns sandbox id \"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f\"" Jun 25 14:18:41.325556 containerd[1911]: time="2024-06-25T14:18:41.324954246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:41.327798 containerd[1911]: time="2024-06-25T14:18:41.327714232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7210579" Jun 25 14:18:41.347105 containerd[1911]: time="2024-06-25T14:18:41.346973424Z" level=info msg="ImageCreate event name:\"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:41.359495 containerd[1911]: time="2024-06-25T14:18:41.359422165Z" level=info msg="CreateContainer within sandbox \"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:18:41.360172 containerd[1911]: time="2024-06-25T14:18:41.360114655Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:41.366532 containerd[1911]: time="2024-06-25T14:18:41.366332143Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:41.367457 containerd[1911]: time="2024-06-25T14:18:41.367373566Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\", repo tag 
\"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"8577147\" in 2.257295652s" Jun 25 14:18:41.367457 containerd[1911]: time="2024-06-25T14:18:41.367444653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:94ad0dc71bacd91f470c20e61073c2dc00648fd583c0fb95657dee38af05e5ed\"" Jun 25 14:18:41.390990 containerd[1911]: time="2024-06-25T14:18:41.390910129Z" level=info msg="CreateContainer within sandbox \"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jun 25 14:18:41.398221 containerd[1911]: time="2024-06-25T14:18:41.397921194Z" level=info msg="CreateContainer within sandbox \"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"377576d12b123951f44bd5bd9692431d1d618263828ad7bf40c0cc30a76a80c6\"" Jun 25 14:18:41.400723 containerd[1911]: time="2024-06-25T14:18:41.400590317Z" level=info msg="StartContainer for \"377576d12b123951f44bd5bd9692431d1d618263828ad7bf40c0cc30a76a80c6\"" Jun 25 14:18:41.403519 containerd[1911]: time="2024-06-25T14:18:41.403436654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-47ngv,Uid:465b00ac-d2f9-4d4f-8724-a625ed37de19,Namespace:kube-system,Attempt:1,} returns sandbox id \"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c\"" Jun 25 14:18:41.416782 containerd[1911]: time="2024-06-25T14:18:41.416663623Z" level=info msg="CreateContainer within sandbox \"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 25 14:18:41.419204 containerd[1911]: time="2024-06-25T14:18:41.419121104Z" level=info msg="CreateContainer within sandbox \"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"fb8e43e09675ce135ab1afb0eef6b746b015109ee3b2b79fd5f302e5c3433285\"" Jun 25 14:18:41.422658 containerd[1911]: time="2024-06-25T14:18:41.421059626Z" level=info msg="StartContainer for \"fb8e43e09675ce135ab1afb0eef6b746b015109ee3b2b79fd5f302e5c3433285\"" Jun 25 14:18:41.439445 containerd[1911]: time="2024-06-25T14:18:41.439338523Z" level=info msg="CreateContainer within sandbox \"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"00e5c9e4957075bc2950bd5958f1be95d69f133aecafae99caf29bb980111397\"" Jun 25 14:18:41.442749 containerd[1911]: time="2024-06-25T14:18:41.440395329Z" level=info msg="StartContainer for \"00e5c9e4957075bc2950bd5958f1be95d69f133aecafae99caf29bb980111397\"" Jun 25 14:18:41.574218 containerd[1911]: time="2024-06-25T14:18:41.574045015Z" level=info msg="StartContainer for \"377576d12b123951f44bd5bd9692431d1d618263828ad7bf40c0cc30a76a80c6\" returns successfully" Jun 25 14:18:41.634981 containerd[1911]: time="2024-06-25T14:18:41.634906694Z" level=info msg="StartContainer for \"00e5c9e4957075bc2950bd5958f1be95d69f133aecafae99caf29bb980111397\" returns successfully" Jun 25 14:18:41.692884 containerd[1911]: time="2024-06-25T14:18:41.692804978Z" level=info msg="StartContainer for \"fb8e43e09675ce135ab1afb0eef6b746b015109ee3b2b79fd5f302e5c3433285\" returns successfully" Jun 25 14:18:41.697496 containerd[1911]: time="2024-06-25T14:18:41.696598502Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Jun 25 14:18:41.862798 systemd-networkd[1599]: cali1cbb04f17a5: Gained IPv6LL Jun 25 14:18:41.998168 kubelet[3283]: I0625 14:18:41.998114 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-47ngv" podStartSLOduration=51.998044999 podCreationTimestamp="2024-06-25 14:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:18:41.995901736 +0000 UTC m=+65.766989200" watchObservedRunningTime="2024-06-25 14:18:41.998044999 +0000 UTC m=+65.769132451" Jun 25 14:18:42.063107 kubelet[3283]: I0625 14:18:42.063052 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-rcct9" podStartSLOduration=52.062970171 podCreationTimestamp="2024-06-25 14:17:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-06-25 14:18:42.024689708 +0000 UTC m=+65.795777172" watchObservedRunningTime="2024-06-25 14:18:42.062970171 +0000 UTC m=+65.834057623" Jun 25 14:18:42.109000 audit[5265]: NETFILTER_CFG table=filter:105 family=2 entries=14 op=nft_register_rule pid=5265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.109000 audit[5265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffcc0878b0 a2=0 a3=1 items=0 ppid=3461 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.119001 kernel: audit: type=1325 audit(1719325122.109:354): table=filter:105 family=2 entries=14 op=nft_register_rule pid=5265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.119157 kernel: audit: type=1300 audit(1719325122.109:354): arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffcc0878b0 a2=0 a3=1 items=0 ppid=3461 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:42.122334 kernel: audit: type=1327 audit(1719325122.109:354): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:42.109000 audit[5265]: NETFILTER_CFG table=nat:106 family=2 entries=14 op=nft_register_rule pid=5265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.109000 audit[5265]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffcc0878b0 a2=0 a3=1 items=0 ppid=3461 pid=5265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.109000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:42.126773 kernel: audit: type=1325 audit(1719325122.109:355): table=nat:106 family=2 entries=14 op=nft_register_rule pid=5265 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.138000 audit[5267]: 
NETFILTER_CFG table=filter:107 family=2 entries=11 op=nft_register_rule pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.138000 audit[5267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcafc4db0 a2=0 a3=1 items=0 ppid=3461 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.138000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:42.161000 audit[5267]: NETFILTER_CFG table=nat:108 family=2 entries=47 op=nft_register_chain pid=5267 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:42.161000 audit[5267]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcafc4db0 a2=0 a3=1 items=0 ppid=3461 pid=5267 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.161000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:42.180471 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4268925007.mount: Deactivated successfully. Jun 25 14:18:42.566858 systemd-networkd[1599]: calibd480aab9c4: Gained IPv6LL Jun 25 14:18:42.723497 systemd[1]: Started sshd@13-172.31.29.41:22-139.178.68.195:38226.service - OpenSSH per-connection server daemon (139.178.68.195:38226). Jun 25 14:18:42.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.41:22-139.178.68.195:38226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:42.909000 audit[5270]: USER_ACCT pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:42.911901 sshd[5270]: Accepted publickey for core from 139.178.68.195 port 38226 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:42.913000 audit[5270]: CRED_ACQ pid=5270 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:42.913000 audit[5270]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc3721c80 a2=3 a3=1 items=0 ppid=1 pid=5270 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:42.913000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:42.916356 sshd[5270]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:42.927632 systemd-logind[1895]: New session 14 of user core. Jun 25 14:18:42.932160 systemd[1]: Started session-14.scope - Session 14 of User core. 
Jun 25 14:18:42.945000 audit[5270]: USER_START pid=5270 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:42.948000 audit[5273]: CRED_ACQ pid=5273 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:43.151240 systemd[1]: run-containerd-runc-k8s.io-c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96-runc.bR4XGg.mount: Deactivated successfully. Jun 25 14:18:43.462511 sshd[5270]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:43.464000 audit[5270]: USER_END pid=5270 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:43.464000 audit[5270]: CRED_DISP pid=5270 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:43.469994 systemd-logind[1895]: Session 14 logged out. Waiting for processes to exit. Jun 25 14:18:43.470000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-172.31.29.41:22-139.178.68.195:38226 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:43.471547 systemd[1]: sshd@13-172.31.29.41:22-139.178.68.195:38226.service: Deactivated successfully. Jun 25 14:18:43.473478 systemd[1]: session-14.scope: Deactivated successfully. Jun 25 14:18:43.480294 systemd-logind[1895]: Removed session 14. 
Jun 25 14:18:43.785867 containerd[1911]: time="2024-06-25T14:18:43.785548211Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:43.789436 containerd[1911]: time="2024-06-25T14:18:43.788992565Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=9548567" Jun 25 14:18:43.791049 containerd[1911]: time="2024-06-25T14:18:43.790979111Z" level=info msg="ImageCreate event name:\"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:43.794925 containerd[1911]: time="2024-06-25T14:18:43.794833489Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:43.798817 containerd[1911]: time="2024-06-25T14:18:43.798758858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:43.802214 containerd[1911]: time="2024-06-25T14:18:43.802086956Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"10915087\" in 2.105334771s" Jun 25 14:18:43.802478 containerd[1911]: time="2024-06-25T14:18:43.802432985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:f708eddd5878891da5bc6148fc8bb3f7277210481a15957910fe5fb551a5ed28\"" Jun 25 14:18:43.810012 containerd[1911]: time="2024-06-25T14:18:43.809950162Z" level=info msg="CreateContainer within sandbox \"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jun 25 14:18:43.866178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788742646.mount: Deactivated successfully. Jun 25 14:18:43.898261 containerd[1911]: time="2024-06-25T14:18:43.897452050Z" level=info msg="CreateContainer within sandbox \"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"22d36324b5880cf8d3ff520d2bc82fef5733f39aa1d69c8c13ff03d2c7788521\"" Jun 25 14:18:43.900979 containerd[1911]: time="2024-06-25T14:18:43.900136498Z" level=info msg="StartContainer for \"22d36324b5880cf8d3ff520d2bc82fef5733f39aa1d69c8c13ff03d2c7788521\"" Jun 25 14:18:43.999559 systemd[1]: run-containerd-runc-k8s.io-22d36324b5880cf8d3ff520d2bc82fef5733f39aa1d69c8c13ff03d2c7788521-runc.Xua3KP.mount: Deactivated successfully. 
Jun 25 14:18:44.108592 containerd[1911]: time="2024-06-25T14:18:44.106727085Z" level=info msg="StartContainer for \"22d36324b5880cf8d3ff520d2bc82fef5733f39aa1d69c8c13ff03d2c7788521\" returns successfully" Jun 25 14:18:44.893312 kubelet[3283]: I0625 14:18:44.893266 3283 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jun 25 14:18:44.894095 kubelet[3283]: I0625 14:18:44.894069 3283 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jun 25 14:18:45.141719 kubelet[3283]: I0625 14:18:45.141652 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-s85fn" podStartSLOduration=27.942766684 podCreationTimestamp="2024-06-25 14:18:12 +0000 UTC" firstStartedPulling="2024-06-25 14:18:38.604147988 +0000 UTC m=+62.375235416" lastFinishedPulling="2024-06-25 14:18:43.802942992 +0000 UTC m=+67.574030420" observedRunningTime="2024-06-25 14:18:45.049232957 +0000 UTC m=+68.820320421" watchObservedRunningTime="2024-06-25 14:18:45.141561688 +0000 UTC m=+68.912649152" Jun 25 14:18:45.144284 kubelet[3283]: I0625 14:18:45.144035 3283 topology_manager.go:215] "Topology Admit Handler" podUID="3ab7de26-14eb-47f7-b45a-015f80ef737a" podNamespace="calico-apiserver" podName="calico-apiserver-6c94fb546f-4thlw" Jun 25 14:18:45.300473 kubelet[3283]: I0625 14:18:45.300403 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ab7de26-14eb-47f7-b45a-015f80ef737a-calico-apiserver-certs\") pod \"calico-apiserver-6c94fb546f-4thlw\" (UID: \"3ab7de26-14eb-47f7-b45a-015f80ef737a\") " pod="calico-apiserver/calico-apiserver-6c94fb546f-4thlw" Jun 25 14:18:45.300784 kubelet[3283]: I0625 14:18:45.300510 3283 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jr7hs\" (UniqueName: \"kubernetes.io/projected/3ab7de26-14eb-47f7-b45a-015f80ef737a-kube-api-access-jr7hs\") pod \"calico-apiserver-6c94fb546f-4thlw\" (UID: \"3ab7de26-14eb-47f7-b45a-015f80ef737a\") " pod="calico-apiserver/calico-apiserver-6c94fb546f-4thlw" Jun 25 14:18:45.401352 kubelet[3283]: E0625 14:18:45.401178 3283 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Jun 25 14:18:45.401352 kubelet[3283]: E0625 14:18:45.401310 3283 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3ab7de26-14eb-47f7-b45a-015f80ef737a-calico-apiserver-certs podName:3ab7de26-14eb-47f7-b45a-015f80ef737a nodeName:}" failed. No retries permitted until 2024-06-25 14:18:45.901276116 +0000 UTC m=+69.672363568 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3ab7de26-14eb-47f7-b45a-015f80ef737a-calico-apiserver-certs") pod "calico-apiserver-6c94fb546f-4thlw" (UID: "3ab7de26-14eb-47f7-b45a-015f80ef737a") : secret "calico-apiserver-certs" not found Jun 25 14:18:45.525000 audit[5358]: NETFILTER_CFG table=filter:109 family=2 entries=9 op=nft_register_rule pid=5358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:45.525000 audit[5358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffdd741570 a2=0 a3=1 items=0 ppid=3461 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:45.525000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:45.529000 audit[5358]: NETFILTER_CFG table=nat:110 family=2 entries=20 op=nft_register_rule pid=5358 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:45.529000 audit[5358]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdd741570 a2=0 a3=1 items=0 ppid=3461 pid=5358 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:45.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:45.559000 audit[5362]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:45.559000 audit[5362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc7956160 a2=0 a3=1 items=0 ppid=3461 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:45.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:45.571000 audit[5362]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=5362 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:45.571000 audit[5362]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffc7956160 a2=0 a3=1 items=0 ppid=3461 pid=5362 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:45.571000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:46.084240 containerd[1911]: time="2024-06-25T14:18:46.084157301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c94fb546f-4thlw,Uid:3ab7de26-14eb-47f7-b45a-015f80ef737a,Namespace:calico-apiserver,Attempt:0,}" Jun 25 14:18:46.375010 (udev-worker)[5385]: Network interface NamePolicy= disabled on kernel command line. 
Jun 25 14:18:46.376018 systemd-networkd[1599]: cali7433f95052e: Link UP Jun 25 14:18:46.383378 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jun 25 14:18:46.383531 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali7433f95052e: link becomes ready Jun 25 14:18:46.383837 systemd-networkd[1599]: cali7433f95052e: Gained carrier Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.209 [INFO][5365] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0 calico-apiserver-6c94fb546f- calico-apiserver 3ab7de26-14eb-47f7-b45a-015f80ef737a 958 0 2024-06-25 14:18:45 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c94fb546f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-29-41 calico-apiserver-6c94fb546f-4thlw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7433f95052e [] []}} ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.210 [INFO][5365] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.284 [INFO][5376] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" HandleID="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Workload="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.302 [INFO][5376] ipam_plugin.go 264: Auto assigning IP ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" HandleID="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Workload="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-29-41", "pod":"calico-apiserver-6c94fb546f-4thlw", "timestamp":"2024-06-25 14:18:46.284313748 +0000 UTC"}, Hostname:"ip-172-31-29-41", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.303 [INFO][5376] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.303 [INFO][5376] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.303 [INFO][5376] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-29-41' Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.306 [INFO][5376] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.315 [INFO][5376] ipam.go 372: Looking up existing affinities for host host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.325 [INFO][5376] ipam.go 489: Trying affinity for 192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.328 [INFO][5376] ipam.go 155: Attempting to load block cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.332 [INFO][5376] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.128/26 host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.333 [INFO][5376] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.128/26 handle="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.336 [INFO][5376] ipam.go 1685: Creating new handle: k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.343 [INFO][5376] ipam.go 1203: Writing block in order to claim IPs block=192.168.115.128/26 handle="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.359 [INFO][5376] ipam.go 1216: Successfully claimed IPs: [192.168.115.133/26] block=192.168.115.128/26 handle="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.359 [INFO][5376] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.133/26] handle="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" host="ip-172-31-29-41" Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.359 [INFO][5376] ipam_plugin.go 373: Released host-wide IPAM lock. 
Jun 25 14:18:46.414522 containerd[1911]: 2024-06-25 14:18:46.359 [INFO][5376] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.115.133/26] IPv6=[] ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" HandleID="k8s-pod-network.7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Workload="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.362 [INFO][5365] k8s.go 386: Populated endpoint ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0", GenerateName:"calico-apiserver-6c94fb546f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ab7de26-14eb-47f7-b45a-015f80ef737a", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c94fb546f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"", Pod:"calico-apiserver-6c94fb546f-4thlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7433f95052e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.362 [INFO][5365] k8s.go 387: Calico CNI using IPs: [192.168.115.133/32] ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.363 [INFO][5365] dataplane_linux.go 68: Setting the host side veth name to cali7433f95052e ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.378 [INFO][5365] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.378 [INFO][5365] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0", GenerateName:"calico-apiserver-6c94fb546f-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ab7de26-14eb-47f7-b45a-015f80ef737a", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c94fb546f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff", Pod:"calico-apiserver-6c94fb546f-4thlw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7433f95052e", MAC:"ce:5c:c4:39:26:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:18:46.416149 containerd[1911]: 2024-06-25 14:18:46.404 [INFO][5365] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff" Namespace="calico-apiserver" Pod="calico-apiserver-6c94fb546f-4thlw" WorkloadEndpoint="ip--172--31--29--41-k8s-calico--apiserver--6c94fb546f--4thlw-eth0" Jun 25 14:18:46.467506 containerd[1911]: time="2024-06-25T14:18:46.467339961Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jun 25 14:18:46.467881 containerd[1911]: time="2024-06-25T14:18:46.467783285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:46.468155 containerd[1911]: time="2024-06-25T14:18:46.468072459Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jun 25 14:18:46.468402 containerd[1911]: time="2024-06-25T14:18:46.468313789Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jun 25 14:18:46.490132 kernel: kauditd_printk_skb: 31 callbacks suppressed Jun 25 14:18:46.490285 kernel: audit: type=1325 audit(1719325126.485:371): table=filter:113 family=2 entries=55 op=nft_register_chain pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:46.485000 audit[5420]: NETFILTER_CFG table=filter:113 family=2 entries=55 op=nft_register_chain pid=5420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jun 25 14:18:46.485000 audit[5420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffc8fe2290 a2=0 a3=ffff9678afa8 items=0 ppid=4427 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:46.497239 kernel: audit: type=1300 audit(1719325126.485:371): arch=c00000b7 syscall=211 success=yes exit=27464 a0=3 a1=ffffc8fe2290 a2=0 a3=ffff9678afa8 items=0 ppid=4427 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:46.485000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:46.505487 kernel: audit: type=1327 audit(1719325126.485:371): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jun 25 14:18:46.556317 systemd[1]: run-containerd-runc-k8s.io-7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff-runc.fAfYlt.mount: Deactivated successfully. Jun 25 14:18:46.622302 containerd[1911]: time="2024-06-25T14:18:46.622248024Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c94fb546f-4thlw,Uid:3ab7de26-14eb-47f7-b45a-015f80ef737a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff\"" Jun 25 14:18:46.625670 containerd[1911]: time="2024-06-25T14:18:46.625473346Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Jun 25 14:18:47.558872 systemd-networkd[1599]: cali7433f95052e: Gained IPv6LL Jun 25 14:18:48.490000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.41:22-139.178.68.195:33660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:48.491354 systemd[1]: Started sshd@14-172.31.29.41:22-139.178.68.195:33660.service - OpenSSH per-connection server daemon (139.178.68.195:33660). Jun 25 14:18:48.497083 kernel: audit: type=1130 audit(1719325128.490:372): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.41:22-139.178.68.195:33660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:48.714000 audit[5448]: USER_ACCT pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.725874 sshd[5448]: Accepted publickey for core from 139.178.68.195 port 33660 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:48.727682 kernel: audit: type=1101 audit(1719325128.714:373): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.726000 audit[5448]: CRED_ACQ pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.734327 sshd[5448]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:48.739319 kernel: audit: type=1103 audit(1719325128.726:374): pid=5448 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.739462 kernel: audit: type=1006 audit(1719325128.726:375): pid=5448 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jun 25 14:18:48.745069 kernel: audit: type=1300 audit(1719325128.726:375): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcdcfc1e0 a2=3 a3=1 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:48.726000 audit[5448]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcdcfc1e0 a2=3 a3=1 items=0 ppid=1 pid=5448 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:48.726000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:48.755888 kernel: audit: type=1327 audit(1719325128.726:375): proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:48.764689 systemd-logind[1895]: New session 15 of user core. Jun 25 14:18:48.769325 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jun 25 14:18:48.795000 audit[5448]: USER_START pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.803742 kernel: audit: type=1105 audit(1719325128.795:376): pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:48.801000 audit[5451]: CRED_ACQ pid=5451 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:49.253226 sshd[5448]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:49.256000 audit[5448]: USER_END pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:49.256000 audit[5448]: CRED_DISP pid=5448 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:49.260904 systemd[1]: sshd@14-172.31.29.41:22-139.178.68.195:33660.service: Deactivated successfully. Jun 25 14:18:49.259000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-172.31.29.41:22-139.178.68.195:33660 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:49.263519 systemd[1]: session-15.scope: Deactivated successfully. Jun 25 14:18:49.263553 systemd-logind[1895]: Session 15 logged out. Waiting for processes to exit. Jun 25 14:18:49.267397 systemd-logind[1895]: Removed session 15. 
Jun 25 14:18:49.683264 containerd[1911]: time="2024-06-25T14:18:49.683184451Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:49.685205 containerd[1911]: time="2024-06-25T14:18:49.685130309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=37831527" Jun 25 14:18:49.686013 containerd[1911]: time="2024-06-25T14:18:49.685974719Z" level=info msg="ImageCreate event name:\"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:49.689153 containerd[1911]: time="2024-06-25T14:18:49.689092044Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:49.692792 containerd[1911]: time="2024-06-25T14:18:49.692733586Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 25 14:18:49.697745 containerd[1911]: time="2024-06-25T14:18:49.697670134Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"39198111\" in 3.071792631s" Jun 25 14:18:49.697996 containerd[1911]: time="2024-06-25T14:18:49.697742854Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:cfbcd2d846bffa8495396cef27ce876ed8ebd8e36f660b8dd9326c1ff4d770ac\"" Jun 25 14:18:49.704325 containerd[1911]: time="2024-06-25T14:18:49.704252578Z" level=info msg="CreateContainer within sandbox \"7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jun 25 14:18:49.730221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2797891524.mount: Deactivated successfully. 
Jun 25 14:18:49.735932 containerd[1911]: time="2024-06-25T14:18:49.735821110Z" level=info msg="CreateContainer within sandbox \"7958ede14fd1b4412dc03a824bd01759f450242008e01df9c704169c3b326fff\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"3c36829158122ef5749f85d08bfbf276fb87f738d4bc9f8dc1fa89d971cc1b3a\"" Jun 25 14:18:49.740709 containerd[1911]: time="2024-06-25T14:18:49.738487610Z" level=info msg="StartContainer for \"3c36829158122ef5749f85d08bfbf276fb87f738d4bc9f8dc1fa89d971cc1b3a\"" Jun 25 14:18:49.905242 containerd[1911]: time="2024-06-25T14:18:49.905178918Z" level=info msg="StartContainer for \"3c36829158122ef5749f85d08bfbf276fb87f738d4bc9f8dc1fa89d971cc1b3a\" returns successfully" Jun 25 14:18:50.066460 kubelet[3283]: I0625 14:18:50.066287 3283 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c94fb546f-4thlw" podStartSLOduration=1.992901231 podCreationTimestamp="2024-06-25 14:18:45 +0000 UTC" firstStartedPulling="2024-06-25 14:18:46.624811011 +0000 UTC m=+70.395898451" lastFinishedPulling="2024-06-25 14:18:49.698116291 +0000 UTC m=+73.469203731" observedRunningTime="2024-06-25 14:18:50.062992513 +0000 UTC m=+73.834079965" watchObservedRunningTime="2024-06-25 14:18:50.066206511 +0000 UTC m=+73.837293951" Jun 25 14:18:50.123000 audit[5502]: NETFILTER_CFG table=filter:114 family=2 entries=10 op=nft_register_rule pid=5502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:50.123000 audit[5502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe7d4b7a0 a2=0 a3=1 items=0 ppid=3461 pid=5502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:50.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:50.126000 audit[5502]: NETFILTER_CFG table=nat:115 family=2 entries=20 op=nft_register_rule pid=5502 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:50.126000 audit[5502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe7d4b7a0 a2=0 a3=1 items=0 ppid=3461 pid=5502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:50.126000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:52.038000 audit[5509]: NETFILTER_CFG table=filter:116 family=2 entries=9 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:52.041164 kernel: kauditd_printk_skb: 10 callbacks suppressed Jun 25 14:18:52.041307 kernel: audit: type=1325 audit(1719325132.038:383): table=filter:116 family=2 entries=9 op=nft_register_rule pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:52.038000 audit[5509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd0a56780 a2=0 a3=1 items=0 ppid=3461 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.056139 kernel: audit: type=1300 
audit(1719325132.038:383): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd0a56780 a2=0 a3=1 items=0 ppid=3461 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.056294 kernel: audit: type=1327 audit(1719325132.038:383): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:52.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:52.059000 audit[5509]: NETFILTER_CFG table=nat:117 family=2 entries=27 op=nft_register_chain pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:52.066667 kernel: audit: type=1325 audit(1719325132.059:384): table=nat:117 family=2 entries=27 op=nft_register_chain pid=5509 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:52.066834 kernel: audit: type=1300 audit(1719325132.059:384): arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd0a56780 a2=0 a3=1 items=0 ppid=3461 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.059000 audit[5509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd0a56780 a2=0 a3=1 items=0 ppid=3461 pid=5509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:52.059000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:52.086654 kernel: audit: type=1327 audit(1719325132.059:384): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:54.285364 systemd[1]: Started sshd@15-172.31.29.41:22-139.178.68.195:33664.service - OpenSSH per-connection server daemon (139.178.68.195:33664). Jun 25 14:18:54.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.41:22-139.178.68.195:33664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:54.291674 kernel: audit: type=1130 audit(1719325134.284:385): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.41:22-139.178.68.195:33664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:54.469000 audit[5529]: USER_ACCT pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.471607 sshd[5529]: Accepted publickey for core from 139.178.68.195 port 33664 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:54.475643 sshd[5529]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:54.469000 audit[5529]: CRED_ACQ pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.480658 kernel: audit: type=1101 audit(1719325134.469:386): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.480829 kernel: audit: type=1103 audit(1719325134.469:387): pid=5529 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.484500 kernel: audit: type=1006 audit(1719325134.469:388): pid=5529 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jun 25 14:18:54.469000 audit[5529]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcca5fe00 a2=3 a3=1 items=0 ppid=1 pid=5529 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:54.469000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:54.490828 systemd-logind[1895]: New session 16 of user core. Jun 25 14:18:54.497275 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jun 25 14:18:54.513000 audit[5529]: USER_START pid=5529 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.517000 audit[5532]: CRED_ACQ pid=5532 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.795652 sshd[5529]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:54.796000 audit[5529]: USER_END pid=5529 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.796000 audit[5529]: CRED_DISP pid=5529 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.801191 systemd[1]: sshd@15-172.31.29.41:22-139.178.68.195:33664.service: Deactivated successfully. Jun 25 14:18:54.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-172.31.29.41:22-139.178.68.195:33664 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:54.804022 systemd-logind[1895]: Session 16 logged out. Waiting for processes to exit. Jun 25 14:18:54.804061 systemd[1]: session-16.scope: Deactivated successfully. Jun 25 14:18:54.807576 systemd-logind[1895]: Removed session 16. Jun 25 14:18:54.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.41:22-139.178.68.195:33676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:54.821496 systemd[1]: Started sshd@16-172.31.29.41:22-139.178.68.195:33676.service - OpenSSH per-connection server daemon (139.178.68.195:33676). 
Jun 25 14:18:54.992734 sshd[5542]: Accepted publickey for core from 139.178.68.195 port 33676 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:54.990000 audit[5542]: USER_ACCT pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.995000 audit[5542]: CRED_ACQ pid=5542 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:54.995000 audit[5542]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff07473d0 a2=3 a3=1 items=0 ppid=1 pid=5542 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:54.995000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:54.998325 sshd[5542]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:55.010639 systemd-logind[1895]: New session 17 of user core. Jun 25 14:18:55.014231 systemd[1]: Started session-17.scope - Session 17 of User core. Jun 25 14:18:55.023000 audit[5542]: USER_START pid=5542 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.027000 audit[5545]: CRED_ACQ pid=5545 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.518338 sshd[5542]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:55.519000 audit[5542]: USER_END pid=5542 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.519000 audit[5542]: CRED_DISP pid=5542 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.524240 systemd[1]: sshd@16-172.31.29.41:22-139.178.68.195:33676.service: Deactivated successfully. Jun 25 14:18:55.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-172.31.29.41:22-139.178.68.195:33676 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:55.526447 systemd[1]: session-17.scope: Deactivated successfully. Jun 25 14:18:55.527253 systemd-logind[1895]: Session 17 logged out. Waiting for processes to exit. Jun 25 14:18:55.530847 systemd-logind[1895]: Removed session 17. Jun 25 14:18:55.550586 systemd[1]: Started sshd@17-172.31.29.41:22-139.178.68.195:33692.service - OpenSSH per-connection server daemon (139.178.68.195:33692). 
Jun 25 14:18:55.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.41:22-139.178.68.195:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:55.740000 audit[5559]: USER_ACCT pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.742264 sshd[5559]: Accepted publickey for core from 139.178.68.195 port 33692 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:55.742000 audit[5559]: CRED_ACQ pid=5559 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.742000 audit[5559]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc8458e90 a2=3 a3=1 items=0 ppid=1 pid=5559 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:55.742000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:55.745018 sshd[5559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:55.756136 systemd-logind[1895]: New session 18 of user core. Jun 25 14:18:55.766208 systemd[1]: Started session-18.scope - Session 18 of User core. Jun 25 14:18:55.777000 audit[5559]: USER_START pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:55.781000 audit[5563]: CRED_ACQ pid=5563 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:56.055000 audit[5569]: NETFILTER_CFG table=filter:118 family=2 entries=8 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:56.055000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc291c810 a2=0 a3=1 items=0 ppid=3461 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:56.055000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:56.059000 audit[5569]: NETFILTER_CFG table=nat:119 family=2 entries=30 op=nft_register_rule pid=5569 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:56.059000 audit[5569]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffc291c810 a2=0 a3=1 items=0 ppid=3461 pid=5569 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:56.059000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.562683 kernel: kauditd_printk_skb: 32 callbacks suppressed Jun 25 14:18:58.562892 kernel: audit: type=1325 audit(1719325138.559:411): table=filter:120 family=2 entries=20 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.559000 audit[5582]: NETFILTER_CFG table=filter:120 family=2 entries=20 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.559000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffecf05d60 a2=0 a3=1 items=0 ppid=3461 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.570446 kernel: audit: type=1300 audit(1719325138.559:411): arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffecf05d60 a2=0 a3=1 items=0 ppid=3461 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.559000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.574467 kernel: audit: type=1327 audit(1719325138.559:411): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.574326 sshd[5559]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:58.576000 audit[5582]: NETFILTER_CFG table=nat:121 family=2 entries=22 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.584406 systemd[1]: sshd@17-172.31.29.41:22-139.178.68.195:33692.service: Deactivated successfully. Jun 25 14:18:58.586053 systemd[1]: session-18.scope: Deactivated successfully. Jun 25 14:18:58.587526 systemd-logind[1895]: Session 18 logged out. Waiting for processes to exit. Jun 25 14:18:58.588976 kernel: audit: type=1325 audit(1719325138.576:412): table=nat:121 family=2 entries=22 op=nft_register_rule pid=5582 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.593822 systemd-logind[1895]: Removed session 18. Jun 25 14:18:58.606269 kernel: audit: type=1300 audit(1719325138.576:412): arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffecf05d60 a2=0 a3=1 items=0 ppid=3461 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.576000 audit[5582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffecf05d60 a2=0 a3=1 items=0 ppid=3461 pid=5582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.613532 systemd[1]: Started sshd@18-172.31.29.41:22-139.178.68.195:49454.service - OpenSSH per-connection server daemon (139.178.68.195:49454). 
Jun 25 14:18:58.576000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.617552 kernel: audit: type=1327 audit(1719325138.576:412): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.579000 audit[5559]: USER_END pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.626832 kernel: audit: type=1106 audit(1719325138.579:413): pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.579000 audit[5559]: CRED_DISP pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.634065 kernel: audit: type=1104 audit(1719325138.579:414): pid=5559 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.41:22-139.178.68.195:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:58.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.41:22-139.178.68.195:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:58.650760 kernel: audit: type=1131 audit(1719325138.583:415): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-172.31.29.41:22-139.178.68.195:33692 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:58.655674 kernel: audit: type=1130 audit(1719325138.612:416): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.41:22-139.178.68.195:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:18:58.673000 audit[5588]: NETFILTER_CFG table=filter:122 family=2 entries=32 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.673000 audit[5588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd93125b0 a2=0 a3=1 items=0 ppid=3461 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.673000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.675000 audit[5588]: NETFILTER_CFG table=nat:123 family=2 entries=22 op=nft_register_rule pid=5588 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:18:58.675000 audit[5588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffd93125b0 a2=0 a3=1 items=0 ppid=3461 pid=5588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.675000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:18:58.810000 audit[5585]: USER_ACCT pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.812094 sshd[5585]: Accepted publickey for core from 139.178.68.195 port 49454 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:58.812000 audit[5585]: CRED_ACQ pid=5585 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.812000 audit[5585]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe10bc200 a2=3 a3=1 items=0 ppid=1 pid=5585 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:58.812000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:58.816338 sshd[5585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:58.826760 systemd-logind[1895]: New session 19 of user core. Jun 25 14:18:58.829206 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jun 25 14:18:58.841000 audit[5585]: USER_START pid=5585 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:58.844000 audit[5590]: CRED_ACQ pid=5590 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.524764 sshd[5585]: pam_unix(sshd:session): session closed for user core Jun 25 14:18:59.527000 audit[5585]: USER_END pid=5585 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.528000 audit[5585]: CRED_DISP pid=5585 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.531000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-172.31.29.41:22-139.178.68.195:49454 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:59.532791 systemd[1]: sshd@18-172.31.29.41:22-139.178.68.195:49454.service: Deactivated successfully. Jun 25 14:18:59.534531 systemd[1]: session-19.scope: Deactivated successfully. Jun 25 14:18:59.541464 systemd-logind[1895]: Session 19 logged out. Waiting for processes to exit. Jun 25 14:18:59.546023 systemd-logind[1895]: Removed session 19. Jun 25 14:18:59.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.41:22-139.178.68.195:49470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:18:59.557447 systemd[1]: Started sshd@19-172.31.29.41:22-139.178.68.195:49470.service - OpenSSH per-connection server daemon (139.178.68.195:49470). 
Jun 25 14:18:59.732000 audit[5598]: USER_ACCT pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.736089 sshd[5598]: Accepted publickey for core from 139.178.68.195 port 49470 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:18:59.735000 audit[5598]: CRED_ACQ pid=5598 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.735000 audit[5598]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffef10a590 a2=3 a3=1 items=0 ppid=1 pid=5598 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:18:59.735000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:18:59.737920 sshd[5598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:18:59.747248 systemd-logind[1895]: New session 20 of user core. Jun 25 14:18:59.752188 systemd[1]: Started session-20.scope - Session 20 of User core. Jun 25 14:18:59.768000 audit[5598]: USER_START pid=5598 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:18:59.771000 audit[5601]: CRED_ACQ pid=5601 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:00.008971 sshd[5598]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:00.011000 audit[5598]: USER_END pid=5598 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:00.011000 audit[5598]: CRED_DISP pid=5598 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:00.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-172.31.29.41:22-139.178.68.195:49470 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:00.017168 systemd[1]: sshd@19-172.31.29.41:22-139.178.68.195:49470.service: Deactivated successfully. Jun 25 14:19:00.019798 systemd[1]: session-20.scope: Deactivated successfully. Jun 25 14:19:00.020586 systemd-logind[1895]: Session 20 logged out. Waiting for processes to exit. Jun 25 14:19:00.023125 systemd-logind[1895]: Removed session 20. 
Jun 25 14:19:00.038000 audit[5612]: NETFILTER_CFG table=filter:124 family=2 entries=32 op=nft_register_rule pid=5612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:00.038000 audit[5612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe0cf0200 a2=0 a3=1 items=0 ppid=3461 pid=5612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:00.038000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:00.040000 audit[5612]: NETFILTER_CFG table=nat:125 family=2 entries=22 op=nft_register_rule pid=5612 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:00.040000 audit[5612]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe0cf0200 a2=0 a3=1 items=0 ppid=3461 pid=5612 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:00.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:05.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.41:22-139.178.68.195:49482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:05.041493 systemd[1]: Started sshd@20-172.31.29.41:22-139.178.68.195:49482.service - OpenSSH per-connection server daemon (139.178.68.195:49482). Jun 25 14:19:05.043155 kernel: kauditd_printk_skb: 33 callbacks suppressed Jun 25 14:19:05.043209 kernel: audit: type=1130 audit(1719325145.041:438): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.41:22-139.178.68.195:49482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:05.217000 audit[5614]: USER_ACCT pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.223732 kernel: audit: type=1101 audit(1719325145.217:439): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.223913 sshd[5614]: Accepted publickey for core from 139.178.68.195 port 49482 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:05.224000 audit[5614]: CRED_ACQ pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.226311 sshd[5614]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:05.233183 kernel: audit: type=1103 audit(1719325145.224:440): pid=5614 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.233326 kernel: audit: type=1006 audit(1719325145.224:441): pid=5614 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jun 25 14:19:05.224000 audit[5614]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeecedbc0 a2=3 a3=1 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:05.238379 kernel: audit: type=1300 audit(1719325145.224:441): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeecedbc0 a2=3 a3=1 items=0 ppid=1 pid=5614 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:05.238561 kernel: audit: type=1327 audit(1719325145.224:441): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:05.224000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:05.245656 systemd-logind[1895]: New session 21 of user core. Jun 25 14:19:05.252168 systemd[1]: Started session-21.scope - Session 21 of User core. 
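Each audit(1719325145.224:441)-style tag in these records is a Unix epoch timestamp (seconds.milliseconds) followed by a per-boot serial number; converting the epoch shows it lines up with the journal's wall-clock prefix, so the journal here is evidently rendered in UTC. A small sketch, standard library only:

from datetime import datetime, timezone

def audit_tag(tag):
    # tag looks like "1719325145.224:441" -> epoch.millis ':' serial
    stamp, serial = tag.split(":")
    when = datetime.fromtimestamp(float(stamp), tz=timezone.utc)
    return f"{when.isoformat()} (serial {serial})"

print(audit_tag("1719325145.224:441"))
# -> 2024-06-25T14:19:05.224000+00:00 (serial 441), matching "Jun 25 14:19:05" above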
Jun 25 14:19:05.262000 audit[5614]: USER_START pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.269670 kernel: audit: type=1105 audit(1719325145.262:442): pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.270000 audit[5617]: CRED_ACQ pid=5617 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.276711 kernel: audit: type=1103 audit(1719325145.270:443): pid=5617 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.511486 sshd[5614]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:05.513000 audit[5614]: USER_END pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.514000 audit[5614]: CRED_DISP pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.520392 systemd[1]: sshd@20-172.31.29.41:22-139.178.68.195:49482.service: Deactivated successfully. Jun 25 14:19:05.524323 kernel: audit: type=1106 audit(1719325145.513:444): pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.524433 kernel: audit: type=1104 audit(1719325145.514:445): pid=5614 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:05.525754 systemd-logind[1895]: Session 21 logged out. Waiting for processes to exit. Jun 25 14:19:05.525855 systemd[1]: session-21.scope: Deactivated successfully. Jun 25 14:19:05.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-172.31.29.41:22-139.178.68.195:49482 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:05.529785 systemd-logind[1895]: Removed session 21. Jun 25 14:19:10.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.41:22-139.178.68.195:33278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:10.540416 systemd[1]: Started sshd@21-172.31.29.41:22-139.178.68.195:33278.service - OpenSSH per-connection server daemon (139.178.68.195:33278). Jun 25 14:19:10.541954 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:10.542017 kernel: audit: type=1130 audit(1719325150.540:447): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.41:22-139.178.68.195:33278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:10.712000 audit[5634]: USER_ACCT pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.713459 sshd[5634]: Accepted publickey for core from 139.178.68.195 port 33278 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:10.718767 kernel: audit: type=1101 audit(1719325150.712:448): pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.719000 audit[5634]: CRED_ACQ pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.722203 sshd[5634]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:10.724683 kernel: audit: type=1103 audit(1719325150.719:449): pid=5634 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.724789 kernel: audit: type=1006 audit(1719325150.720:450): pid=5634 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jun 25 14:19:10.720000 audit[5634]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc488a8e0 a2=3 a3=1 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:10.732424 kernel: audit: type=1300 audit(1719325150.720:450): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc488a8e0 a2=3 a3=1 items=0 ppid=1 pid=5634 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:10.720000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:10.734349 kernel: audit: type=1327 audit(1719325150.720:450): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:10.740906 systemd-logind[1895]: New session 22 of user core. Jun 25 14:19:10.748332 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jun 25 14:19:10.761000 audit[5634]: USER_START pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.767843 kernel: audit: type=1105 audit(1719325150.761:451): pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.767000 audit[5637]: CRED_ACQ pid=5637 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:10.773845 kernel: audit: type=1103 audit(1719325150.767:452): pid=5637 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:11.012021 sshd[5634]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:11.014000 audit[5634]: USER_END pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:11.015000 audit[5634]: CRED_DISP pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:11.019128 systemd[1]: sshd@21-172.31.29.41:22-139.178.68.195:33278.service: Deactivated successfully. Jun 25 14:19:11.021174 systemd[1]: session-22.scope: Deactivated successfully. Jun 25 14:19:11.024523 kernel: audit: type=1106 audit(1719325151.014:453): pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:11.024714 kernel: audit: type=1104 audit(1719325151.015:454): pid=5634 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:11.024783 systemd-logind[1895]: Session 22 logged out. Waiting for processes to exit. Jun 25 14:19:11.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-172.31.29.41:22-139.178.68.195:33278 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:11.028783 systemd-logind[1895]: Removed session 22. Jun 25 14:19:13.130764 systemd[1]: run-containerd-runc-k8s.io-c55db7cadd21dcf24ea2de6b35780262fa53023b55caf6c87b265a519e25ea96-runc.EMaMCG.mount: Deactivated successfully. 
Jun 25 14:19:16.042398 systemd[1]: Started sshd@22-172.31.29.41:22-139.178.68.195:33294.service - OpenSSH per-connection server daemon (139.178.68.195:33294). Jun 25 14:19:16.048658 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:16.048712 kernel: audit: type=1130 audit(1719325156.042:456): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.41:22-139.178.68.195:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:16.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.41:22-139.178.68.195:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:16.220000 audit[5675]: USER_ACCT pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.221356 sshd[5675]: Accepted publickey for core from 139.178.68.195 port 33294 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:16.225178 sshd[5675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:16.220000 audit[5675]: CRED_ACQ pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.229759 kernel: audit: type=1101 audit(1719325156.220:457): pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.229876 kernel: audit: type=1103 audit(1719325156.220:458): pid=5675 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.235120 kernel: audit: type=1006 audit(1719325156.220:459): pid=5675 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jun 25 14:19:16.235219 kernel: audit: type=1300 audit(1719325156.220:459): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4612f60 a2=3 a3=1 items=0 ppid=1 pid=5675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:16.220000 audit[5675]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4612f60 a2=3 a3=1 items=0 ppid=1 pid=5675 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:16.238414 systemd-logind[1895]: New session 23 of user core. Jun 25 14:19:16.242147 kernel: audit: type=1327 audit(1719325156.220:459): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:16.220000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:16.242324 systemd[1]: Started session-23.scope - Session 23 of User core. 
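The "Accepted publickey ... RSA SHA256:t7Am3wob..." lines carry OpenSSH's SHA256 key fingerprint, which is the unpadded base64 of a SHA-256 digest over the decoded key blob from the matching authorized_keys entry. A sketch of recomputing it for comparison; the key material itself is not in this log, so the authorized_keys line below is a hypothetical placeholder (on the host it would normally live under /home/core/.ssh/authorized_keys):

import base64, hashlib

def ssh_sha256_fingerprint(authorized_keys_line):
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob = base64.b64decode(authorized_keys_line.split()[1])
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

line = "ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAAB...PLACEHOLDER... core@example"
# print(ssh_sha256_fingerprint(line))  # compare against SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4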
Jun 25 14:19:16.252000 audit[5675]: USER_START pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.258000 audit[5678]: CRED_ACQ pid=5678 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.263720 kernel: audit: type=1105 audit(1719325156.252:460): pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.263823 kernel: audit: type=1103 audit(1719325156.258:461): pid=5678 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.507660 sshd[5675]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:16.509000 audit[5675]: USER_END pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.516063 systemd[1]: sshd@22-172.31.29.41:22-139.178.68.195:33294.service: Deactivated successfully. Jun 25 14:19:16.518545 systemd-logind[1895]: Session 23 logged out. Waiting for processes to exit. Jun 25 14:19:16.518854 systemd[1]: session-23.scope: Deactivated successfully. Jun 25 14:19:16.509000 audit[5675]: CRED_DISP pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.524133 kernel: audit: type=1106 audit(1719325156.509:462): pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.524306 kernel: audit: type=1104 audit(1719325156.509:463): pid=5675 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:16.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-172.31.29.41:22-139.178.68.195:33294 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:16.526258 systemd-logind[1895]: Removed session 23. Jun 25 14:19:21.544405 systemd[1]: Started sshd@23-172.31.29.41:22-139.178.68.195:32792.service - OpenSSH per-connection server daemon (139.178.68.195:32792). 
Jun 25 14:19:21.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.41:22-139.178.68.195:32792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:21.547644 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:21.547757 kernel: audit: type=1130 audit(1719325161.544:465): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.41:22-139.178.68.195:32792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:21.721000 audit[5695]: USER_ACCT pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.722653 sshd[5695]: Accepted publickey for core from 139.178.68.195 port 32792 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:21.726087 sshd[5695]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:21.724000 audit[5695]: CRED_ACQ pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.730797 kernel: audit: type=1101 audit(1719325161.721:466): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.730910 kernel: audit: type=1103 audit(1719325161.724:467): pid=5695 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.733771 kernel: audit: type=1006 audit(1719325161.724:468): pid=5695 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jun 25 14:19:21.724000 audit[5695]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe272cda0 a2=3 a3=1 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:21.738798 kernel: audit: type=1300 audit(1719325161.724:468): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe272cda0 a2=3 a3=1 items=0 ppid=1 pid=5695 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:21.724000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:21.741080 kernel: audit: type=1327 audit(1719325161.724:468): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:21.746501 systemd-logind[1895]: New session 24 of user core. Jun 25 14:19:21.756226 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jun 25 14:19:21.768000 audit[5695]: USER_START pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.772000 audit[5698]: CRED_ACQ pid=5698 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.778548 kernel: audit: type=1105 audit(1719325161.768:469): pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:21.778707 kernel: audit: type=1103 audit(1719325161.772:470): pid=5698 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:22.028003 sshd[5695]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:22.029000 audit[5695]: USER_END pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:22.035167 systemd[1]: sshd@23-172.31.29.41:22-139.178.68.195:32792.service: Deactivated successfully. Jun 25 14:19:22.031000 audit[5695]: CRED_DISP pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:22.040773 kernel: audit: type=1106 audit(1719325162.029:471): pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:22.040908 kernel: audit: type=1104 audit(1719325162.031:472): pid=5695 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:22.038446 systemd[1]: session-24.scope: Deactivated successfully. Jun 25 14:19:22.041167 systemd-logind[1895]: Session 24 logged out. Waiting for processes to exit. Jun 25 14:19:22.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-172.31.29.41:22-139.178.68.195:32792 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:22.043050 systemd-logind[1895]: Removed session 24. 
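The NETFILTER_CFG/SYSCALL pairs just below (like the earlier ones at 14:18:58 and 14:19:00) record iptables-restore running in nft mode: comm is clipped to "iptables-restor" by the kernel's 15-character comm limit, exe is /usr/sbin/xtables-nft-multi, and the audited syscall is the netlink sendmsg() that submits the nftables batch. A small lookup sketch for the raw fields as they appear here, limited to the values actually present in this log (aarch64 host):

ARCH = {"c00000b7": "AUDIT_ARCH_AARCH64 (64-bit little-endian ARM)"}
SYSCALLS_AARCH64 = {64: "write", 211: "sendmsg"}   # asm-generic numbers used by arm64
FAMILIES = {2: "AF_INET (IPv4)"}

def describe(arch, syscall, family=None):
    parts = [ARCH.get(arch, arch), SYSCALLS_AARCH64.get(syscall, str(syscall))]
    if family is not None:
        parts.append(FAMILIES.get(family, str(family)))
    return ", ".join(parts)

print(describe("c00000b7", 211, family=2))   # the nft_register_rule/nft_register_chain records
print(describe("c00000b7", 64))              # the sshd SYSCALL records earlier in this log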
Jun 25 14:19:22.102000 audit[5709]: NETFILTER_CFG table=filter:126 family=2 entries=20 op=nft_register_rule pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:22.102000 audit[5709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffff968f50 a2=0 a3=1 items=0 ppid=3461 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:22.102000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:22.108000 audit[5709]: NETFILTER_CFG table=nat:127 family=2 entries=106 op=nft_register_chain pid=5709 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:22.108000 audit[5709]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffff968f50 a2=0 a3=1 items=0 ppid=3461 pid=5709 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:22.108000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:24.017399 systemd[1]: run-containerd-runc-k8s.io-c5d2922002f2a6888eb50a331ed3b7e13cd79bb7bd7475d1dcd8adf5ab3a8ae8-runc.qEBCxK.mount: Deactivated successfully. Jun 25 14:19:24.137000 audit[5731]: NETFILTER_CFG table=filter:128 family=2 entries=8 op=nft_register_rule pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:24.137000 audit[5731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd140f9a0 a2=0 a3=1 items=0 ppid=3461 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:24.137000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:24.141000 audit[5731]: NETFILTER_CFG table=nat:129 family=2 entries=58 op=nft_register_chain pid=5731 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jun 25 14:19:24.141000 audit[5731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20452 a0=3 a1=ffffd140f9a0 a2=0 a3=1 items=0 ppid=3461 pid=5731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:24.141000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jun 25 14:19:27.058559 systemd[1]: Started sshd@24-172.31.29.41:22-139.178.68.195:32804.service - OpenSSH per-connection server daemon (139.178.68.195:32804). Jun 25 14:19:27.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.41:22-139.178.68.195:32804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:27.061282 kernel: kauditd_printk_skb: 13 callbacks suppressed Jun 25 14:19:27.061390 kernel: audit: type=1130 audit(1719325167.058:478): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.41:22-139.178.68.195:32804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:27.233000 audit[5734]: USER_ACCT pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.234755 sshd[5734]: Accepted publickey for core from 139.178.68.195 port 32804 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:27.238640 kernel: audit: type=1101 audit(1719325167.233:479): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.239000 audit[5734]: CRED_ACQ pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.241595 sshd[5734]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:27.247913 kernel: audit: type=1103 audit(1719325167.239:480): pid=5734 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.248087 kernel: audit: type=1006 audit(1719325167.239:481): pid=5734 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=25 res=1 Jun 25 14:19:27.248135 kernel: audit: type=1300 audit(1719325167.239:481): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc83a2290 a2=3 a3=1 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:27.239000 audit[5734]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc83a2290 a2=3 a3=1 items=0 ppid=1 pid=5734 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=25 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:27.239000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:27.254601 kernel: audit: type=1327 audit(1719325167.239:481): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:27.259517 systemd-logind[1895]: New session 25 of user core. Jun 25 14:19:27.265315 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jun 25 14:19:27.276000 audit[5734]: USER_START pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.279000 audit[5737]: CRED_ACQ pid=5737 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.286362 kernel: audit: type=1105 audit(1719325167.276:482): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.286440 kernel: audit: type=1103 audit(1719325167.279:483): pid=5737 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.524036 sshd[5734]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:27.526000 audit[5734]: USER_END pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.529798 systemd[1]: sshd@24-172.31.29.41:22-139.178.68.195:32804.service: Deactivated successfully. Jun 25 14:19:27.531957 systemd[1]: session-25.scope: Deactivated successfully. Jun 25 14:19:27.526000 audit[5734]: CRED_DISP pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.537532 kernel: audit: type=1106 audit(1719325167.526:484): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.537702 kernel: audit: type=1104 audit(1719325167.526:485): pid=5734 uid=0 auid=500 ses=25 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:27.537890 systemd-logind[1895]: Session 25 logged out. Waiting for processes to exit. Jun 25 14:19:27.526000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-172.31.29.41:22-139.178.68.195:32804 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:27.540175 systemd-logind[1895]: Removed session 25. Jun 25 14:19:32.556407 systemd[1]: Started sshd@25-172.31.29.41:22-139.178.68.195:55386.service - OpenSSH per-connection server daemon (139.178.68.195:55386). 
Jun 25 14:19:32.563499 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:32.563681 kernel: audit: type=1130 audit(1719325172.556:487): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.41:22-139.178.68.195:55386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:32.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.41:22-139.178.68.195:55386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:32.731000 audit[5752]: USER_ACCT pid=5752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.732201 sshd[5752]: Accepted publickey for core from 139.178.68.195 port 55386 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:32.736739 kernel: audit: type=1101 audit(1719325172.731:488): pid=5752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.738000 audit[5752]: CRED_ACQ pid=5752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.740070 sshd[5752]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:32.746422 kernel: audit: type=1103 audit(1719325172.738:489): pid=5752 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.746585 kernel: audit: type=1006 audit(1719325172.738:490): pid=5752 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=26 res=1 Jun 25 14:19:32.746740 kernel: audit: type=1300 audit(1719325172.738:490): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1dcbd40 a2=3 a3=1 items=0 ppid=1 pid=5752 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:32.738000 audit[5752]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd1dcbd40 a2=3 a3=1 items=0 ppid=1 pid=5752 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=26 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:32.738000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:32.753474 kernel: audit: type=1327 audit(1719325172.738:490): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:32.758463 systemd-logind[1895]: New session 26 of user core. Jun 25 14:19:32.765434 systemd[1]: Started session-26.scope - Session 26 of User core. 
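Sessions 19 through 26 in this stretch all follow the same short-lived pattern: systemd-logind logs "New session N of user core", sshd closes the connection within a second or so, and logind logs "Removed session N". A throwaway sketch for pairing those two messages from a saved copy of this journal and printing rough session durations; the regexes mirror the wording above, the year is assumed from the containerd timestamps later in the log, and journal.txt is a placeholder file name:

import re
from datetime import datetime

TS = r"(\w{3} +\d+ \d\d:\d\d:\d\d\.\d+)"
NEW = re.compile(TS + r" systemd-logind\[\d+\]: New session (\d+) of user core")
REMOVED = re.compile(TS + r" systemd-logind\[\d+\]: Removed session (\d+)")

def parse(ts):
    # the journal prefix carries no year; 2024 is assumed from the containerd lines below
    return datetime.strptime("2024 " + ts, "%Y %b %d %H:%M:%S.%f")

text = open("journal.txt").read()   # placeholder path for a saved copy of this log
opened = {sid: parse(ts) for ts, sid in NEW.findall(text)}
for ts, sid in REMOVED.findall(text):
    if sid in opened:
        print(f"session {sid}: open for "
              f"{(parse(ts) - opened[sid]).total_seconds():.1f}s")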
Jun 25 14:19:32.777000 audit[5752]: USER_START pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.784671 kernel: audit: type=1105 audit(1719325172.777:491): pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.784000 audit[5755]: CRED_ACQ pid=5755 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:32.789682 kernel: audit: type=1103 audit(1719325172.784:492): pid=5755 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:33.028309 sshd[5752]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:33.029000 audit[5752]: USER_END pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:33.036605 systemd[1]: sshd@25-172.31.29.41:22-139.178.68.195:55386.service: Deactivated successfully. Jun 25 14:19:33.040137 systemd[1]: session-26.scope: Deactivated successfully. Jun 25 14:19:33.041240 systemd-logind[1895]: Session 26 logged out. Waiting for processes to exit. Jun 25 14:19:33.031000 audit[5752]: CRED_DISP pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:33.047540 kernel: audit: type=1106 audit(1719325173.029:493): pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:33.047688 kernel: audit: type=1104 audit(1719325173.031:494): pid=5752 uid=0 auid=500 ses=26 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:33.045626 systemd-logind[1895]: Removed session 26. Jun 25 14:19:33.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-172.31.29.41:22-139.178.68.195:55386 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:36.912894 containerd[1911]: time="2024-06-25T14:19:36.912829869Z" level=info msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:36.983 [WARNING][5800] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"81a99eed-b323-4056-acb1-e2466297b4af", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f", Pod:"coredns-5dd5756b68-rcct9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd480aab9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:36.984 [INFO][5800] k8s.go 608: Cleaning up netns ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:36.984 [INFO][5800] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" iface="eth0" netns="" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:36.984 [INFO][5800] k8s.go 615: Releasing IP address(es) ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:36.984 [INFO][5800] utils.go 188: Calico CNI releasing IP address ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.029 [INFO][5806] ipam_plugin.go 411: Releasing address using handleID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.029 [INFO][5806] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.029 [INFO][5806] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.042 [WARNING][5806] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.042 [INFO][5806] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.045 [INFO][5806] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.050562 containerd[1911]: 2024-06-25 14:19:37.047 [INFO][5800] k8s.go 621: Teardown processing complete. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.052076 containerd[1911]: time="2024-06-25T14:19:37.050658514Z" level=info msg="TearDown network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" successfully" Jun 25 14:19:37.052076 containerd[1911]: time="2024-06-25T14:19:37.050710798Z" level=info msg="StopPodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" returns successfully" Jun 25 14:19:37.052076 containerd[1911]: time="2024-06-25T14:19:37.051379378Z" level=info msg="RemovePodSandbox for \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" Jun 25 14:19:37.052076 containerd[1911]: time="2024-06-25T14:19:37.051450454Z" level=info msg="Forcibly stopping sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\"" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.134 [WARNING][5824] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"81a99eed-b323-4056-acb1-e2466297b4af", ResourceVersion:"886", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"576b048239c31ff51145775ae30897ce93df886cd104ec95f2de4af63d54e82f", Pod:"coredns-5dd5756b68-rcct9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibd480aab9c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.135 [INFO][5824] k8s.go 608: Cleaning up netns ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.135 [INFO][5824] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" iface="eth0" netns="" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.135 [INFO][5824] k8s.go 615: Releasing IP address(es) ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.135 [INFO][5824] utils.go 188: Calico CNI releasing IP address ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.180 [INFO][5831] ipam_plugin.go 411: Releasing address using handleID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.180 [INFO][5831] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.181 [INFO][5831] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.195 [WARNING][5831] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.195 [INFO][5831] ipam_plugin.go 439: Releasing address using workloadID ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" HandleID="k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0" Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.198 [INFO][5831] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.208316 containerd[1911]: 2024-06-25 14:19:37.202 [INFO][5824] k8s.go 621: Teardown processing complete. ContainerID="b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1" Jun 25 14:19:37.210138 containerd[1911]: time="2024-06-25T14:19:37.210058342Z" level=info msg="TearDown network for sandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" successfully" Jun 25 14:19:37.219001 containerd[1911]: time="2024-06-25T14:19:37.218941956Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:19:37.219313 containerd[1911]: time="2024-06-25T14:19:37.219274056Z" level=info msg="RemovePodSandbox \"b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1\" returns successfully" Jun 25 14:19:37.220253 containerd[1911]: time="2024-06-25T14:19:37.220184363Z" level=info msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.285 [WARNING][5850] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"465b00ac-d2f9-4d4f-8724-a625ed37de19", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c", Pod:"coredns-5dd5756b68-47ngv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cbb04f17a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.285 [INFO][5850] k8s.go 608: Cleaning up netns ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.286 [INFO][5850] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" iface="eth0" netns="" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.286 [INFO][5850] k8s.go 615: Releasing IP address(es) ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.286 [INFO][5850] utils.go 188: Calico CNI releasing IP address ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.328 [INFO][5856] ipam_plugin.go 411: Releasing address using handleID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.328 [INFO][5856] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.328 [INFO][5856] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.341 [WARNING][5856] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.341 [INFO][5856] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.344 [INFO][5856] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.349194 containerd[1911]: 2024-06-25 14:19:37.346 [INFO][5850] k8s.go 621: Teardown processing complete. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.350317 containerd[1911]: time="2024-06-25T14:19:37.349891974Z" level=info msg="TearDown network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" successfully" Jun 25 14:19:37.350317 containerd[1911]: time="2024-06-25T14:19:37.349959725Z" level=info msg="StopPodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" returns successfully" Jun 25 14:19:37.350971 containerd[1911]: time="2024-06-25T14:19:37.350923084Z" level=info msg="RemovePodSandbox for \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" Jun 25 14:19:37.351096 containerd[1911]: time="2024-06-25T14:19:37.350982124Z" level=info msg="Forcibly stopping sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\"" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.431 [WARNING][5874] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"465b00ac-d2f9-4d4f-8724-a625ed37de19", ResourceVersion:"891", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 17, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"125a659268bd1c5b881e79edecc8d99e7702e12370a59893d693d582e54d050c", Pod:"coredns-5dd5756b68-47ngv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1cbb04f17a5", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.432 [INFO][5874] k8s.go 608: Cleaning up netns ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.432 [INFO][5874] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" iface="eth0" netns="" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.432 [INFO][5874] k8s.go 615: Releasing IP address(es) ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.432 [INFO][5874] utils.go 188: Calico CNI releasing IP address ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.475 [INFO][5880] ipam_plugin.go 411: Releasing address using handleID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.476 [INFO][5880] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.476 [INFO][5880] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.488 [WARNING][5880] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.488 [INFO][5880] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" HandleID="k8s-pod-network.d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Workload="ip--172--31--29--41-k8s-coredns--5dd5756b68--47ngv-eth0" Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.491 [INFO][5880] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.498070 containerd[1911]: 2024-06-25 14:19:37.493 [INFO][5874] k8s.go 621: Teardown processing complete. ContainerID="d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f" Jun 25 14:19:37.498070 containerd[1911]: time="2024-06-25T14:19:37.497427809Z" level=info msg="TearDown network for sandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" successfully" Jun 25 14:19:37.502336 containerd[1911]: time="2024-06-25T14:19:37.502221336Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:19:37.502494 containerd[1911]: time="2024-06-25T14:19:37.502417704Z" level=info msg="RemovePodSandbox \"d5d859eda874b52851f11ede90a5768a7f810606a440edaad5cc02c384bb534f\" returns successfully" Jun 25 14:19:37.503346 containerd[1911]: time="2024-06-25T14:19:37.503301755Z" level=info msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.568 [WARNING][5898] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc7acd19-00be-407a-b3d7-2b1d30780fb3", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2", Pod:"csi-node-driver-s85fn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali392401c060f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.569 [INFO][5898] k8s.go 608: Cleaning up netns ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.569 [INFO][5898] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" iface="eth0" netns="" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.570 [INFO][5898] k8s.go 615: Releasing IP address(es) ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.570 [INFO][5898] utils.go 188: Calico CNI releasing IP address ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.610 [INFO][5904] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.610 [INFO][5904] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.610 [INFO][5904] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.624 [WARNING][5904] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.624 [INFO][5904] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.627 [INFO][5904] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.632951 containerd[1911]: 2024-06-25 14:19:37.630 [INFO][5898] k8s.go 621: Teardown processing complete. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.633997 containerd[1911]: time="2024-06-25T14:19:37.633019758Z" level=info msg="TearDown network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" successfully" Jun 25 14:19:37.633997 containerd[1911]: time="2024-06-25T14:19:37.633076446Z" level=info msg="StopPodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" returns successfully" Jun 25 14:19:37.634414 containerd[1911]: time="2024-06-25T14:19:37.634363228Z" level=info msg="RemovePodSandbox for \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" Jun 25 14:19:37.634631 containerd[1911]: time="2024-06-25T14:19:37.634549804Z" level=info msg="Forcibly stopping sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\"" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.704 [WARNING][5922] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"cc7acd19-00be-407a-b3d7-2b1d30780fb3", ResourceVersion:"932", Generation:0, CreationTimestamp:time.Date(2024, time.June, 25, 14, 18, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-29-41", ContainerID:"485054efa8b9c182c024c1006ed165176214c73b56ab5857445412553b82d6f2", Pod:"csi-node-driver-s85fn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.115.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali392401c060f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.705 [INFO][5922] k8s.go 608: Cleaning up netns ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.705 [INFO][5922] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" iface="eth0" netns="" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.705 [INFO][5922] k8s.go 615: Releasing IP address(es) ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.705 [INFO][5922] utils.go 188: Calico CNI releasing IP address ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.781 [INFO][5929] ipam_plugin.go 411: Releasing address using handleID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.781 [INFO][5929] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.782 [INFO][5929] ipam_plugin.go 367: Acquired host-wide IPAM lock. Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.794 [WARNING][5929] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.794 [INFO][5929] ipam_plugin.go 439: Releasing address using workloadID ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" HandleID="k8s-pod-network.3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Workload="ip--172--31--29--41-k8s-csi--node--driver--s85fn-eth0" Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.796 [INFO][5929] ipam_plugin.go 373: Released host-wide IPAM lock. Jun 25 14:19:37.804396 containerd[1911]: 2024-06-25 14:19:37.800 [INFO][5922] k8s.go 621: Teardown processing complete. ContainerID="3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3" Jun 25 14:19:37.804396 containerd[1911]: time="2024-06-25T14:19:37.802806042Z" level=info msg="TearDown network for sandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" successfully" Jun 25 14:19:37.808467 containerd[1911]: time="2024-06-25T14:19:37.808362204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jun 25 14:19:37.808690 containerd[1911]: time="2024-06-25T14:19:37.808574580Z" level=info msg="RemovePodSandbox \"3e95d397d3f19be0b1f89afeb23d150b99b082a1e790fdc99ff613b916d93bc3\" returns successfully" Jun 25 14:19:38.065525 systemd[1]: Started sshd@26-172.31.29.41:22-139.178.68.195:43174.service - OpenSSH per-connection server daemon (139.178.68.195:43174). Jun 25 14:19:38.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.41:22-139.178.68.195:43174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:38.068828 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:38.068972 kernel: audit: type=1130 audit(1719325178.067:496): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.41:22-139.178.68.195:43174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jun 25 14:19:38.250000 audit[5935]: USER_ACCT pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.254017 sshd[5935]: Accepted publickey for core from 139.178.68.195 port 43174 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:38.255053 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:38.252000 audit[5935]: CRED_ACQ pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.260399 kernel: audit: type=1101 audit(1719325178.250:497): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.260578 kernel: audit: type=1103 audit(1719325178.252:498): pid=5935 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.264099 kernel: audit: type=1006 audit(1719325178.252:499): pid=5935 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=27 res=1 Jun 25 14:19:38.269660 kernel: audit: type=1300 audit(1719325178.252:499): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3de8b80 a2=3 a3=1 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:38.252000 audit[5935]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd3de8b80 a2=3 a3=1 items=0 ppid=1 pid=5935 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=27 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:38.252000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:38.272002 kernel: audit: type=1327 audit(1719325178.252:499): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:38.275594 systemd-logind[1895]: New session 27 of user core. Jun 25 14:19:38.282197 systemd[1]: Started session-27.scope - Session 27 of User core. 
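The containerd entries above repeat the same Calico CNI teardown sequence three times, once per sandbox being force-stopped (the two coredns sandboxes and the csi-node-driver sandbox): clean up the netns (a no-op here, since no netns name remains), release the endpoint's IP addresses under the host-wide IPAM lock, first by handle ID and then by workload ID, and log "Teardown processing complete" before containerd reports RemovePodSandbox as successful. The repeated "Asked to release address but it doesn't exist. Ignoring" warnings are the benign case: the address was already freed by an earlier DEL, so the second release finds nothing to do. The Go sketch below only models that logged ordering; ipamStore, releaseForEndpoint and the map-backed store are hypothetical names invented for illustration and are not Calico's actual implementation.

package main

import (
	"fmt"
	"sync"
)

// ipamStore is a toy stand-in for an IPAM backend keyed by handle ID and by
// workload ID. It is not Calico's data model; it only mirrors the two release
// paths visible in the ipam_plugin.go log lines above.
type ipamStore struct {
	mu         sync.Mutex // plays the role of the "host-wide IPAM lock"
	byHandle   map[string][]string
	byWorkload map[string][]string
}

// releaseForEndpoint mimics the ordering of the logged steps: acquire the
// host-wide lock, try to release by handle ID (warn if nothing is there),
// then release by workload ID, then drop the lock.
func (s *ipamStore) releaseForEndpoint(handleID, workloadID string) {
	fmt.Println("About to acquire host-wide IPAM lock.")
	s.mu.Lock()
	fmt.Println("Acquired host-wide IPAM lock.")
	defer func() {
		s.mu.Unlock()
		fmt.Println("Released host-wide IPAM lock.")
	}()

	if addrs, ok := s.byHandle[handleID]; ok {
		fmt.Printf("Releasing %v using handleID %s\n", addrs, handleID)
		delete(s.byHandle, handleID)
	} else {
		// The benign case seen in the log: the address is already gone,
		// so the release is a no-op.
		fmt.Println("WARNING: Asked to release address but it doesn't exist. Ignoring")
	}

	if addrs, ok := s.byWorkload[workloadID]; ok {
		fmt.Printf("Releasing %v using workloadID %s\n", addrs, workloadID)
		delete(s.byWorkload, workloadID)
	}
}

func main() {
	store := &ipamStore{
		byHandle:   map[string][]string{},
		byWorkload: map[string][]string{},
	}
	// Second delete of the same sandbox: both maps are already empty, so
	// only the warning path fires, matching the entries above.
	store.releaseForEndpoint(
		"k8s-pod-network.b95c3f3e8d6692c1b3989401e8ed20019a0963d7764d3671f13b485be6b9a6a1",
		"ip--172--31--29--41-k8s-coredns--5dd5756b68--rcct9-eth0",
	)
}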
Jun 25 14:19:38.293000 audit[5935]: USER_START pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.299691 kernel: audit: type=1105 audit(1719325178.293:500): pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.300000 audit[5939]: CRED_ACQ pid=5939 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.307704 kernel: audit: type=1103 audit(1719325178.300:501): pid=5939 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.545929 sshd[5935]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:38.548000 audit[5935]: USER_END pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.553479 systemd[1]: sshd@26-172.31.29.41:22-139.178.68.195:43174.service: Deactivated successfully. Jun 25 14:19:38.555165 systemd[1]: session-27.scope: Deactivated successfully. Jun 25 14:19:38.548000 audit[5935]: CRED_DISP pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.560706 kernel: audit: type=1106 audit(1719325178.548:502): pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.560837 kernel: audit: type=1104 audit(1719325178.548:503): pid=5935 uid=0 auid=500 ses=27 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:38.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@26-172.31.29.41:22-139.178.68.195:43174 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:38.561603 systemd-logind[1895]: Session 27 logged out. Waiting for processes to exit. Jun 25 14:19:38.563220 systemd-logind[1895]: Removed session 27. Jun 25 14:19:43.576526 systemd[1]: Started sshd@27-172.31.29.41:22-139.178.68.195:43176.service - OpenSSH per-connection server daemon (139.178.68.195:43176). 
Jun 25 14:19:43.582473 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:43.582557 kernel: audit: type=1130 audit(1719325183.576:505): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.41:22-139.178.68.195:43176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:43.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.41:22-139.178.68.195:43176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:43.755000 audit[5976]: USER_ACCT pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.756411 sshd[5976]: Accepted publickey for core from 139.178.68.195 port 43176 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:43.761716 kernel: audit: type=1101 audit(1719325183.755:506): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.762000 audit[5976]: CRED_ACQ pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.764125 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:43.771374 kernel: audit: type=1103 audit(1719325183.762:507): pid=5976 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.771512 kernel: audit: type=1006 audit(1719325183.762:508): pid=5976 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=28 res=1 Jun 25 14:19:43.762000 audit[5976]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5254e00 a2=3 a3=1 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:43.778450 kernel: audit: type=1300 audit(1719325183.762:508): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff5254e00 a2=3 a3=1 items=0 ppid=1 pid=5976 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=28 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:43.762000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:43.780735 kernel: audit: type=1327 audit(1719325183.762:508): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:43.785539 systemd-logind[1895]: New session 28 of user core. Jun 25 14:19:43.790144 systemd[1]: Started session-28.scope - Session 28 of User core. 
Jun 25 14:19:43.804000 audit[5976]: USER_START pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.811696 kernel: audit: type=1105 audit(1719325183.804:509): pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.811000 audit[5980]: CRED_ACQ pid=5980 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:43.816784 kernel: audit: type=1103 audit(1719325183.811:510): pid=5980 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:44.071114 sshd[5976]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:44.072000 audit[5976]: USER_END pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:44.075000 audit[5976]: CRED_DISP pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:44.078643 systemd[1]: sshd@27-172.31.29.41:22-139.178.68.195:43176.service: Deactivated successfully. Jun 25 14:19:44.083219 kernel: audit: type=1106 audit(1719325184.072:511): pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:44.083340 kernel: audit: type=1104 audit(1719325184.075:512): pid=5976 uid=0 auid=500 ses=28 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:44.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@27-172.31.29.41:22-139.178.68.195:43176 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:44.084296 systemd[1]: session-28.scope: Deactivated successfully. Jun 25 14:19:44.085134 systemd-logind[1895]: Session 28 logged out. Waiting for processes to exit. Jun 25 14:19:44.087347 systemd-logind[1895]: Removed session 28. 
Jun 25 14:19:49.103668 kernel: kauditd_printk_skb: 1 callbacks suppressed Jun 25 14:19:49.103841 kernel: audit: type=1130 audit(1719325189.102:514): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.41:22-139.178.68.195:59314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:49.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.41:22-139.178.68.195:59314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:49.102323 systemd[1]: Started sshd@28-172.31.29.41:22-139.178.68.195:59314.service - OpenSSH per-connection server daemon (139.178.68.195:59314). Jun 25 14:19:49.278000 audit[5990]: USER_ACCT pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.279349 sshd[5990]: Accepted publickey for core from 139.178.68.195 port 59314 ssh2: RSA SHA256:t7Am3wobCVUQdBRxpgYDtUWxKGU60mVjJuotmrvKHg4 Jun 25 14:19:49.284712 kernel: audit: type=1101 audit(1719325189.278:515): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.285000 audit[5990]: CRED_ACQ pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.286889 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jun 25 14:19:49.294425 kernel: audit: type=1103 audit(1719325189.285:516): pid=5990 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.294543 kernel: audit: type=1006 audit(1719325189.285:517): pid=5990 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=29 res=1 Jun 25 14:19:49.300462 kernel: audit: type=1300 audit(1719325189.285:517): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc680830 a2=3 a3=1 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:49.285000 audit[5990]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdc680830 a2=3 a3=1 items=0 ppid=1 pid=5990 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=29 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Jun 25 14:19:49.285000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:49.303357 kernel: audit: type=1327 audit(1719325189.285:517): proctitle=737368643A20636F7265205B707269765D Jun 25 14:19:49.303984 systemd-logind[1895]: New session 29 of user core. Jun 25 14:19:49.309377 systemd[1]: Started session-29.scope - Session 29 of User core. 
Jun 25 14:19:49.320000 audit[5990]: USER_START pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.320000 audit[5998]: CRED_ACQ pid=5998 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.330854 kernel: audit: type=1105 audit(1719325189.320:518): pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.330978 kernel: audit: type=1103 audit(1719325189.320:519): pid=5998 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.569547 sshd[5990]: pam_unix(sshd:session): session closed for user core Jun 25 14:19:49.571000 audit[5990]: USER_END pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.575682 systemd[1]: sshd@28-172.31.29.41:22-139.178.68.195:59314.service: Deactivated successfully. Jun 25 14:19:49.577171 systemd[1]: session-29.scope: Deactivated successfully. Jun 25 14:19:49.572000 audit[5990]: CRED_DISP pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.583018 kernel: audit: type=1106 audit(1719325189.571:520): pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.583136 kernel: audit: type=1104 audit(1719325189.572:521): pid=5990 uid=0 auid=500 ses=29 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=139.178.68.195 addr=139.178.68.195 terminal=ssh res=success' Jun 25 14:19:49.583851 systemd-logind[1895]: Session 29 logged out. Waiting for processes to exit. Jun 25 14:19:49.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@28-172.31.29.41:22-139.178.68.195:59314 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jun 25 14:19:49.591697 systemd-logind[1895]: Removed session 29.
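The closing entries record three short SSH sessions (27 through 29) from 139.178.68.195, each with the same audit trail: USER_ACCT and CRED_ACQ when the public key is accepted, USER_START when pam_unix opens the session, then USER_END, CRED_DISP and a SERVICE_STOP for the per-connection sshd unit when it closes. As a rough illustration of how the paired pam_unix lines can be correlated, the sketch below matches "session opened"/"session closed" entries by sshd PID and prints each session's duration. The regular expression, the fixed-format timestamp parsing (the journal omits the year) and the embedded sample lines are simplifying assumptions for this log's layout, not a general journald consumer.

package main

import (
	"bufio"
	"fmt"
	"regexp"
	"strings"
	"time"
)

// entryRE extracts the timestamp, sshd PID and open/close state from the
// pam_unix session lines seen above.
var entryRE = regexp.MustCompile(
	`^(\w+ \d+ \d+:\d+:\d+\.\d+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)`)

func main() {
	// Sample lines copied from the journal above (sessions 27, 28 and 29).
	journal := `Jun 25 14:19:38.255053 sshd[5935]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:19:38.545929 sshd[5935]: pam_unix(sshd:session): session closed for user core
Jun 25 14:19:43.764125 sshd[5976]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:19:44.071114 sshd[5976]: pam_unix(sshd:session): session closed for user core
Jun 25 14:19:49.286889 sshd[5990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jun 25 14:19:49.569547 sshd[5990]: pam_unix(sshd:session): session closed for user core`

	opened := map[string]time.Time{} // sshd PID -> time the session opened
	sc := bufio.NewScanner(strings.NewReader(journal))
	for sc.Scan() {
		m := entryRE.FindStringSubmatch(sc.Text())
		if m == nil {
			continue
		}
		// Parse the year-less syslog timestamp; only differences matter here.
		ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
		if err != nil {
			continue
		}
		pid, state := m[2], m[3]
		if state == "opened" {
			opened[pid] = ts
			continue
		}
		if start, ok := opened[pid]; ok {
			fmt.Printf("sshd[%s]: session lasted %v\n", pid, ts.Sub(start))
			delete(opened, pid)
		}
	}
}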