Mar 17 18:30:35.763332 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Mar 17 18:30:35.763351 kernel: Linux version 5.15.179-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Mar 17 17:11:44 -00 2025
Mar 17 18:30:35.763358 kernel: efi: EFI v2.70 by EDK II
Mar 17 18:30:35.763364 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Mar 17 18:30:35.763369 kernel: random: crng init done
Mar 17 18:30:35.763375 kernel: ACPI: Early table checksum verification disabled
Mar 17 18:30:35.763381 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Mar 17 18:30:35.763388 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Mar 17 18:30:35.763394 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763399 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763405 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763442 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763448 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763454 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763462 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763468 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763474 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 18:30:35.763479 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Mar 17 18:30:35.763485 kernel: NUMA: Failed to initialise from firmware
Mar 17 18:30:35.763491 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:30:35.763496 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Mar 17 18:30:35.763502 kernel: Zone ranges:
Mar 17 18:30:35.763508 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:30:35.763514 kernel: DMA32 empty
Mar 17 18:30:35.763520 kernel: Normal empty
Mar 17 18:30:35.763526 kernel: Movable zone start for each node
Mar 17 18:30:35.763537 kernel: Early memory node ranges
Mar 17 18:30:35.763543 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Mar 17 18:30:35.763549 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Mar 17 18:30:35.763555 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Mar 17 18:30:35.763560 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Mar 17 18:30:35.763566 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Mar 17 18:30:35.763572 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Mar 17 18:30:35.763577 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Mar 17 18:30:35.763583 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Mar 17 18:30:35.763590 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Mar 17 18:30:35.763596 kernel: psci: probing for conduit method from ACPI.
Mar 17 18:30:35.763601 kernel: psci: PSCIv1.1 detected in firmware.
Mar 17 18:30:35.763607 kernel: psci: Using standard PSCI v0.2 function IDs
Mar 17 18:30:35.763613 kernel: psci: Trusted OS migration not required
Mar 17 18:30:35.763621 kernel: psci: SMC Calling Convention v1.1
Mar 17 18:30:35.763627 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Mar 17 18:30:35.763634 kernel: ACPI: SRAT not present
Mar 17 18:30:35.763641 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Mar 17 18:30:35.763647 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Mar 17 18:30:35.763653 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Mar 17 18:30:35.763659 kernel: Detected PIPT I-cache on CPU0
Mar 17 18:30:35.763665 kernel: CPU features: detected: GIC system register CPU interface
Mar 17 18:30:35.763671 kernel: CPU features: detected: Hardware dirty bit management
Mar 17 18:30:35.763678 kernel: CPU features: detected: Spectre-v4
Mar 17 18:30:35.763684 kernel: CPU features: detected: Spectre-BHB
Mar 17 18:30:35.763691 kernel: CPU features: kernel page table isolation forced ON by KASLR
Mar 17 18:30:35.763698 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Mar 17 18:30:35.763704 kernel: CPU features: detected: ARM erratum 1418040
Mar 17 18:30:35.763710 kernel: CPU features: detected: SSBS not fully self-synchronizing
Mar 17 18:30:35.763716 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Mar 17 18:30:35.763722 kernel: Policy zone: DMA
Mar 17 18:30:35.763729 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:30:35.763735 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 18:30:35.763742 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 17 18:30:35.763748 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 18:30:35.763754 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 18:30:35.763761 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Mar 17 18:30:35.763768 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 17 18:30:35.763774 kernel: trace event string verifier disabled
Mar 17 18:30:35.763780 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 18:30:35.763786 kernel: rcu: RCU event tracing is enabled.
Mar 17 18:30:35.763792 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 17 18:30:35.763798 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 18:30:35.763805 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 18:30:35.763811 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 18:30:35.763817 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 17 18:30:35.763823 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Mar 17 18:30:35.763830 kernel: GICv3: 256 SPIs implemented
Mar 17 18:30:35.763836 kernel: GICv3: 0 Extended SPIs implemented
Mar 17 18:30:35.763842 kernel: GICv3: Distributor has no Range Selector support
Mar 17 18:30:35.763848 kernel: Root IRQ handler: gic_handle_irq
Mar 17 18:30:35.763854 kernel: GICv3: 16 PPIs implemented
Mar 17 18:30:35.763859 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Mar 17 18:30:35.763865 kernel: ACPI: SRAT not present
Mar 17 18:30:35.763871 kernel: ITS [mem 0x08080000-0x0809ffff]
Mar 17 18:30:35.763878 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Mar 17 18:30:35.763884 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Mar 17 18:30:35.763890 kernel: GICv3: using LPI property table @0x00000000400d0000
Mar 17 18:30:35.763896 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Mar 17 18:30:35.763904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:30:35.763910 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Mar 17 18:30:35.763916 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Mar 17 18:30:35.763922 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Mar 17 18:30:35.763928 kernel: arm-pv: using stolen time PV
Mar 17 18:30:35.763934 kernel: Console: colour dummy device 80x25
Mar 17 18:30:35.763941 kernel: ACPI: Core revision 20210730
Mar 17 18:30:35.763947 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Mar 17 18:30:35.763953 kernel: pid_max: default: 32768 minimum: 301
Mar 17 18:30:35.763959 kernel: LSM: Security Framework initializing
Mar 17 18:30:35.763967 kernel: SELinux: Initializing.
Mar 17 18:30:35.763973 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:30:35.763979 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 17 18:30:35.763986 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 18:30:35.763992 kernel: Platform MSI: ITS@0x8080000 domain created
Mar 17 18:30:35.763998 kernel: PCI/MSI: ITS@0x8080000 domain created
Mar 17 18:30:35.764004 kernel: Remapping and enabling EFI services.
Mar 17 18:30:35.764010 kernel: smp: Bringing up secondary CPUs ...
Mar 17 18:30:35.764017 kernel: Detected PIPT I-cache on CPU1
Mar 17 18:30:35.764024 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Mar 17 18:30:35.764030 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Mar 17 18:30:35.764037 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:30:35.764043 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Mar 17 18:30:35.764049 kernel: Detected PIPT I-cache on CPU2
Mar 17 18:30:35.764056 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Mar 17 18:30:35.764062 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Mar 17 18:30:35.764068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:30:35.764074 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Mar 17 18:30:35.764080 kernel: Detected PIPT I-cache on CPU3
Mar 17 18:30:35.764088 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Mar 17 18:30:35.764094 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Mar 17 18:30:35.764100 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Mar 17 18:30:35.764106 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Mar 17 18:30:35.764117 kernel: smp: Brought up 1 node, 4 CPUs
Mar 17 18:30:35.764125 kernel: SMP: Total of 4 processors activated.
Mar 17 18:30:35.764131 kernel: CPU features: detected: 32-bit EL0 Support
Mar 17 18:30:35.764138 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Mar 17 18:30:35.764145 kernel: CPU features: detected: Common not Private translations
Mar 17 18:30:35.764151 kernel: CPU features: detected: CRC32 instructions
Mar 17 18:30:35.764157 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Mar 17 18:30:35.764164 kernel: CPU features: detected: LSE atomic instructions
Mar 17 18:30:35.764171 kernel: CPU features: detected: Privileged Access Never
Mar 17 18:30:35.764178 kernel: CPU features: detected: RAS Extension Support
Mar 17 18:30:35.764184 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Mar 17 18:30:35.764191 kernel: CPU: All CPU(s) started at EL1
Mar 17 18:30:35.764197 kernel: alternatives: patching kernel code
Mar 17 18:30:35.764205 kernel: devtmpfs: initialized
Mar 17 18:30:35.764211 kernel: KASLR enabled
Mar 17 18:30:35.764218 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 18:30:35.764224 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 17 18:30:35.764231 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 18:30:35.764237 kernel: SMBIOS 3.0.0 present.
Mar 17 18:30:35.764244 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Mar 17 18:30:35.764255 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 18:30:35.764262 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Mar 17 18:30:35.764270 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Mar 17 18:30:35.764277 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Mar 17 18:30:35.764284 kernel: audit: initializing netlink subsys (disabled)
Mar 17 18:30:35.764290 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
Mar 17 18:30:35.764297 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 18:30:35.764303 kernel: cpuidle: using governor menu
Mar 17 18:30:35.764310 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Mar 17 18:30:35.764316 kernel: ASID allocator initialised with 32768 entries
Mar 17 18:30:35.764322 kernel: ACPI: bus type PCI registered
Mar 17 18:30:35.764330 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 18:30:35.764336 kernel: Serial: AMBA PL011 UART driver
Mar 17 18:30:35.764343 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Mar 17 18:30:35.764350 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Mar 17 18:30:35.764356 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 18:30:35.764363 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Mar 17 18:30:35.764370 kernel: cryptd: max_cpu_qlen set to 1000
Mar 17 18:30:35.764376 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Mar 17 18:30:35.764383 kernel: ACPI: Added _OSI(Module Device)
Mar 17 18:30:35.764391 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 18:30:35.764397 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 18:30:35.764404 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 18:30:35.764415 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Mar 17 18:30:35.764422 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Mar 17 18:30:35.764428 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Mar 17 18:30:35.764435 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 18:30:35.764441 kernel: ACPI: Interpreter enabled
Mar 17 18:30:35.764448 kernel: ACPI: Using GIC for interrupt routing
Mar 17 18:30:35.764456 kernel: ACPI: MCFG table detected, 1 entries
Mar 17 18:30:35.764462 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Mar 17 18:30:35.764469 kernel: printk: console [ttyAMA0] enabled
Mar 17 18:30:35.764475 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 18:30:35.764611 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 18:30:35.764673 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Mar 17 18:30:35.764729 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Mar 17 18:30:35.764789 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Mar 17 18:30:35.764859 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Mar 17 18:30:35.764868 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Mar 17 18:30:35.764874 kernel: PCI host bridge to bus 0000:00
Mar 17 18:30:35.764936 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Mar 17 18:30:35.764986 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Mar 17 18:30:35.765038 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Mar 17 18:30:35.765089 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 18:30:35.765160 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Mar 17 18:30:35.765227 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Mar 17 18:30:35.765298 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Mar 17 18:30:35.765356 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Mar 17 18:30:35.765422 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:30:35.765481 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Mar 17 18:30:35.765540 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Mar 17 18:30:35.765598 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Mar 17 18:30:35.765649 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Mar 17 18:30:35.765701 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Mar 17 18:30:35.765751 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Mar 17 18:30:35.765759 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Mar 17 18:30:35.765766 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Mar 17 18:30:35.765772 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Mar 17 18:30:35.765780 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Mar 17 18:30:35.765787 kernel: iommu: Default domain type: Translated
Mar 17 18:30:35.765794 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Mar 17 18:30:35.765800 kernel: vgaarb: loaded
Mar 17 18:30:35.765807 kernel: pps_core: LinuxPPS API ver. 1 registered
Mar 17 18:30:35.765813 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Mar 17 18:30:35.765820 kernel: PTP clock support registered
Mar 17 18:30:35.765827 kernel: Registered efivars operations
Mar 17 18:30:35.765833 kernel: clocksource: Switched to clocksource arch_sys_counter
Mar 17 18:30:35.765841 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 18:30:35.765848 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 18:30:35.765855 kernel: pnp: PnP ACPI init
Mar 17 18:30:35.765917 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Mar 17 18:30:35.765940 kernel: pnp: PnP ACPI: found 1 devices
Mar 17 18:30:35.765946 kernel: NET: Registered PF_INET protocol family
Mar 17 18:30:35.765953 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 17 18:30:35.765960 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 17 18:30:35.765968 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 18:30:35.765975 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 18:30:35.765982 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Mar 17 18:30:35.765988 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 17 18:30:35.765995 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:30:35.766001 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 17 18:30:35.766008 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 18:30:35.766014 kernel: PCI: CLS 0 bytes, default 64
Mar 17 18:30:35.766021 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Mar 17 18:30:35.766029 kernel: kvm [1]: HYP mode not available
Mar 17 18:30:35.766038 kernel: Initialise system trusted keyrings
Mar 17 18:30:35.766044 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 17 18:30:35.766051 kernel: Key type asymmetric registered
Mar 17 18:30:35.766057 kernel: Asymmetric key parser 'x509' registered
Mar 17 18:30:35.766064 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Mar 17 18:30:35.766070 kernel: io scheduler mq-deadline registered
Mar 17 18:30:35.766077 kernel: io scheduler kyber registered
Mar 17 18:30:35.766083 kernel: io scheduler bfq registered
Mar 17 18:30:35.766091 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Mar 17 18:30:35.766097 kernel: ACPI: button: Power Button [PWRB]
Mar 17 18:30:35.766104 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Mar 17 18:30:35.766169 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Mar 17 18:30:35.766178 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 18:30:35.766184 kernel: thunder_xcv, ver 1.0
Mar 17 18:30:35.766191 kernel: thunder_bgx, ver 1.0
Mar 17 18:30:35.766198 kernel: nicpf, ver 1.0
Mar 17 18:30:35.766204 kernel: nicvf, ver 1.0
Mar 17 18:30:35.766284 kernel: rtc-efi rtc-efi.0: registered as rtc0
Mar 17 18:30:35.766341 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-03-17T18:30:35 UTC (1742236235)
Mar 17 18:30:35.766351 kernel: hid: raw HID events driver (C) Jiri Kosina
Mar 17 18:30:35.766357 kernel: NET: Registered PF_INET6 protocol family
Mar 17 18:30:35.766363 kernel: Segment Routing with IPv6
Mar 17 18:30:35.766370 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 18:30:35.766377 kernel: NET: Registered PF_PACKET protocol family
Mar 17 18:30:35.766384 kernel: Key type dns_resolver registered
Mar 17 18:30:35.766392 kernel: registered taskstats version 1
Mar 17 18:30:35.766399 kernel: Loading compiled-in X.509 certificates
Mar 17 18:30:35.766405 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.179-flatcar: c6f3fb83dc6bb7052b07ec5b1ef41d12f9b3f7e4'
Mar 17 18:30:35.766431 kernel: Key type .fscrypt registered
Mar 17 18:30:35.766437 kernel: Key type fscrypt-provisioning registered
Mar 17 18:30:35.766444 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 18:30:35.766450 kernel: ima: Allocated hash algorithm: sha1
Mar 17 18:30:35.766457 kernel: ima: No architecture policies found
Mar 17 18:30:35.766463 kernel: clk: Disabling unused clocks
Mar 17 18:30:35.766472 kernel: Freeing unused kernel memory: 36416K
Mar 17 18:30:35.766478 kernel: Run /init as init process
Mar 17 18:30:35.766484 kernel: with arguments:
Mar 17 18:30:35.766491 kernel: /init
Mar 17 18:30:35.766497 kernel: with environment:
Mar 17 18:30:35.766503 kernel: HOME=/
Mar 17 18:30:35.766510 kernel: TERM=linux
Mar 17 18:30:35.766516 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 18:30:35.766524 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Mar 17 18:30:35.766534 systemd[1]: Detected virtualization kvm.
Mar 17 18:30:35.766542 systemd[1]: Detected architecture arm64.
Mar 17 18:30:35.766548 systemd[1]: Running in initrd.
Mar 17 18:30:35.766555 systemd[1]: No hostname configured, using default hostname.
Mar 17 18:30:35.766562 systemd[1]: Hostname set to .
Mar 17 18:30:35.766569 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 18:30:35.766577 systemd[1]: Queued start job for default target initrd.target.
Mar 17 18:30:35.766585 systemd[1]: Started systemd-ask-password-console.path.
Mar 17 18:30:35.766591 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:30:35.766598 systemd[1]: Reached target paths.target.
Mar 17 18:30:35.766605 systemd[1]: Reached target slices.target.
Mar 17 18:30:35.766612 systemd[1]: Reached target swap.target.
Mar 17 18:30:35.766618 systemd[1]: Reached target timers.target.
Mar 17 18:30:35.766625 systemd[1]: Listening on iscsid.socket.
Mar 17 18:30:35.766634 systemd[1]: Listening on iscsiuio.socket.
Mar 17 18:30:35.766641 systemd[1]: Listening on systemd-journald-audit.socket.
Mar 17 18:30:35.766648 systemd[1]: Listening on systemd-journald-dev-log.socket.
Mar 17 18:30:35.766655 systemd[1]: Listening on systemd-journald.socket.
Mar 17 18:30:35.766662 systemd[1]: Listening on systemd-networkd.socket.
Mar 17 18:30:35.766669 systemd[1]: Listening on systemd-udevd-control.socket.
Mar 17 18:30:35.766676 systemd[1]: Listening on systemd-udevd-kernel.socket.
Mar 17 18:30:35.766683 systemd[1]: Reached target sockets.target.
Mar 17 18:30:35.766690 systemd[1]: Starting kmod-static-nodes.service...
Mar 17 18:30:35.766698 systemd[1]: Finished network-cleanup.service.
Mar 17 18:30:35.766705 systemd[1]: Starting systemd-fsck-usr.service...
Mar 17 18:30:35.766715 systemd[1]: Starting systemd-journald.service...
Mar 17 18:30:35.766723 systemd[1]: Starting systemd-modules-load.service...
Mar 17 18:30:35.766729 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:30:35.766736 systemd[1]: Starting systemd-vconsole-setup.service...
Mar 17 18:30:35.766743 systemd[1]: Finished kmod-static-nodes.service.
Mar 17 18:30:35.766750 systemd[1]: Finished systemd-fsck-usr.service.
Mar 17 18:30:35.766757 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:30:35.766765 systemd[1]: Finished systemd-vconsole-setup.service.
Mar 17 18:30:35.766773 kernel: audit: type=1130 audit(1742236235.763:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.766780 systemd[1]: Starting dracut-cmdline-ask.service...
Mar 17 18:30:35.766790 systemd-journald[289]: Journal started
Mar 17 18:30:35.766831 systemd-journald[289]: Runtime Journal (/run/log/journal/31d4b345d6c54c80b72571a197f1110c) is 6.0M, max 48.7M, 42.6M free.
Mar 17 18:30:35.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.762687 systemd-modules-load[290]: Inserted module 'overlay'
Mar 17 18:30:35.768889 systemd[1]: Started systemd-journald.service.
Mar 17 18:30:35.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.769988 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:30:35.776361 kernel: audit: type=1130 audit(1742236235.769:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.776379 kernel: audit: type=1130 audit(1742236235.772:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.788451 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 17 18:30:35.788675 systemd-resolved[291]: Positive Trust Anchors:
Mar 17 18:30:35.788693 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:30:35.788721 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:30:35.793355 systemd-resolved[291]: Defaulting to hostname 'linux'.
Mar 17 18:30:35.801649 kernel: Bridge firewalling registered
Mar 17 18:30:35.801668 kernel: audit: type=1130 audit(1742236235.797:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.794096 systemd[1]: Started systemd-resolved.service.
Mar 17 18:30:35.797148 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:30:35.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.798351 systemd-modules-load[290]: Inserted module 'br_netfilter'
Mar 17 18:30:35.808444 kernel: audit: type=1130 audit(1742236235.802:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.801804 systemd[1]: Finished dracut-cmdline-ask.service.
Mar 17 18:30:35.803996 systemd[1]: Starting dracut-cmdline.service...
Mar 17 18:30:35.811432 kernel: SCSI subsystem initialized
Mar 17 18:30:35.814977 dracut-cmdline[307]: dracut-dracut-053
Mar 17 18:30:35.817033 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=e034db32d58fe7496a3db6ba3879dd9052cea2cf1597d65edfc7b26afc92530d
Mar 17 18:30:35.822966 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 17 18:30:35.822985 kernel: device-mapper: uevent: version 1.0.3
Mar 17 18:30:35.822993 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Mar 17 18:30:35.826224 systemd-modules-load[290]: Inserted module 'dm_multipath'
Mar 17 18:30:35.827027 systemd[1]: Finished systemd-modules-load.service.
Mar 17 18:30:35.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.828648 systemd[1]: Starting systemd-sysctl.service...
Mar 17 18:30:35.832476 kernel: audit: type=1130 audit(1742236235.827:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.837884 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:30:35.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.842791 kernel: audit: type=1130 audit(1742236235.838:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.878427 kernel: Loading iSCSI transport class v2.0-870.
Mar 17 18:30:35.890422 kernel: iscsi: registered transport (tcp)
Mar 17 18:30:35.905433 kernel: iscsi: registered transport (qla4xxx)
Mar 17 18:30:35.905446 kernel: QLogic iSCSI HBA Driver
Mar 17 18:30:35.937796 systemd[1]: Finished dracut-cmdline.service.
Mar 17 18:30:35.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.939347 systemd[1]: Starting dracut-pre-udev.service...
Mar 17 18:30:35.942847 kernel: audit: type=1130 audit(1742236235.938:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:35.981440 kernel: raid6: neonx8 gen() 13807 MB/s
Mar 17 18:30:35.998427 kernel: raid6: neonx8 xor() 10822 MB/s
Mar 17 18:30:36.015426 kernel: raid6: neonx4 gen() 13527 MB/s
Mar 17 18:30:36.032429 kernel: raid6: neonx4 xor() 11258 MB/s
Mar 17 18:30:36.049428 kernel: raid6: neonx2 gen() 12996 MB/s
Mar 17 18:30:36.066422 kernel: raid6: neonx2 xor() 10230 MB/s
Mar 17 18:30:36.083426 kernel: raid6: neonx1 gen() 10542 MB/s
Mar 17 18:30:36.100427 kernel: raid6: neonx1 xor() 8783 MB/s
Mar 17 18:30:36.117432 kernel: raid6: int64x8 gen() 6262 MB/s
Mar 17 18:30:36.134438 kernel: raid6: int64x8 xor() 3536 MB/s
Mar 17 18:30:36.151438 kernel: raid6: int64x4 gen() 7239 MB/s
Mar 17 18:30:36.168442 kernel: raid6: int64x4 xor() 3848 MB/s
Mar 17 18:30:36.185437 kernel: raid6: int64x2 gen() 6146 MB/s
Mar 17 18:30:36.202441 kernel: raid6: int64x2 xor() 3320 MB/s
Mar 17 18:30:36.219441 kernel: raid6: int64x1 gen() 5041 MB/s
Mar 17 18:30:36.236590 kernel: raid6: int64x1 xor() 2645 MB/s
Mar 17 18:30:36.236613 kernel: raid6: using algorithm neonx8 gen() 13807 MB/s
Mar 17 18:30:36.236629 kernel: raid6: .... xor() 10822 MB/s, rmw enabled
Mar 17 18:30:36.237741 kernel: raid6: using neon recovery algorithm
Mar 17 18:30:36.248902 kernel: xor: measuring software checksum speed
Mar 17 18:30:36.248916 kernel: 8regs : 16674 MB/sec
Mar 17 18:30:36.249623 kernel: 32regs : 20702 MB/sec
Mar 17 18:30:36.250970 kernel: arm64_neon : 27524 MB/sec
Mar 17 18:30:36.250986 kernel: xor: using function: arm64_neon (27524 MB/sec)
Mar 17 18:30:36.312429 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Mar 17 18:30:36.322123 systemd[1]: Finished dracut-pre-udev.service.
Mar 17 18:30:36.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:36.325000 audit: BPF prog-id=7 op=LOAD
Mar 17 18:30:36.325000 audit: BPF prog-id=8 op=LOAD
Mar 17 18:30:36.326436 kernel: audit: type=1130 audit(1742236236.322:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:36.326645 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:30:36.344590 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Mar 17 18:30:36.347875 systemd[1]: Started systemd-udevd.service.
Mar 17 18:30:36.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:36.349398 systemd[1]: Starting dracut-pre-trigger.service...
Mar 17 18:30:36.361419 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation
Mar 17 18:30:36.386331 systemd[1]: Finished dracut-pre-trigger.service.
Mar 17 18:30:36.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:36.387958 systemd[1]: Starting systemd-udev-trigger.service...
Mar 17 18:30:36.420103 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:30:36.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:36.452315 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 17 18:30:36.456174 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 17 18:30:36.456189 kernel: GPT:9289727 != 19775487
Mar 17 18:30:36.456197 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 17 18:30:36.456206 kernel: GPT:9289727 != 19775487 Mar 17 18:30:36.456214 kernel: GPT: Use GNU Parted to correct GPT errors. Mar 17 18:30:36.456222 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:30:36.480581 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Mar 17 18:30:36.483451 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (549) Mar 17 18:30:36.482075 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Mar 17 18:30:36.488691 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Mar 17 18:30:36.494151 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Mar 17 18:30:36.497585 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Mar 17 18:30:36.499385 systemd[1]: Starting disk-uuid.service... Mar 17 18:30:36.506130 disk-uuid[563]: Primary Header is updated. Mar 17 18:30:36.506130 disk-uuid[563]: Secondary Entries is updated. Mar 17 18:30:36.506130 disk-uuid[563]: Secondary Header is updated. Mar 17 18:30:36.510432 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:30:37.520448 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 18:30:37.520712 disk-uuid[564]: The operation has completed successfully. Mar 17 18:30:37.542161 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 18:30:37.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.542272 systemd[1]: Finished disk-uuid.service. Mar 17 18:30:37.546563 systemd[1]: Starting verity-setup.service... 
Mar 17 18:30:37.575488 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Mar 17 18:30:37.596178 systemd[1]: Found device dev-mapper-usr.device. Mar 17 18:30:37.597830 systemd[1]: Mounting sysusr-usr.mount... Mar 17 18:30:37.598695 systemd[1]: Finished verity-setup.service. Mar 17 18:30:37.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.646421 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Mar 17 18:30:37.646660 systemd[1]: Mounted sysusr-usr.mount. Mar 17 18:30:37.647604 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Mar 17 18:30:37.648350 systemd[1]: Starting ignition-setup.service... Mar 17 18:30:37.650857 systemd[1]: Starting parse-ip-for-networkd.service... Mar 17 18:30:37.656906 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:30:37.656934 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:30:37.656947 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:30:37.665053 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 18:30:37.670198 systemd[1]: Finished ignition-setup.service. Mar 17 18:30:37.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.671718 systemd[1]: Starting ignition-fetch-offline.service... Mar 17 18:30:37.735689 systemd[1]: Finished parse-ip-for-networkd.service. Mar 17 18:30:37.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:37.736000 audit: BPF prog-id=9 op=LOAD Mar 17 18:30:37.737771 systemd[1]: Starting systemd-networkd.service... Mar 17 18:30:37.747166 ignition[643]: Ignition 2.14.0 Mar 17 18:30:37.747175 ignition[643]: Stage: fetch-offline Mar 17 18:30:37.747210 ignition[643]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:37.747218 ignition[643]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:37.747345 ignition[643]: parsed url from cmdline: "" Mar 17 18:30:37.747349 ignition[643]: no config URL provided Mar 17 18:30:37.747353 ignition[643]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 18:30:37.747360 ignition[643]: no config at "/usr/lib/ignition/user.ign" Mar 17 18:30:37.747377 ignition[643]: op(1): [started] loading QEMU firmware config module Mar 17 18:30:37.747381 ignition[643]: op(1): executing: "modprobe" "qemu_fw_cfg" Mar 17 18:30:37.756187 ignition[643]: op(1): [finished] loading QEMU firmware config module Mar 17 18:30:37.765007 systemd-networkd[738]: lo: Link UP Mar 17 18:30:37.765020 systemd-networkd[738]: lo: Gained carrier Mar 17 18:30:37.765378 systemd-networkd[738]: Enumeration completed Mar 17 18:30:37.765578 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 18:30:37.766633 systemd-networkd[738]: eth0: Link UP Mar 17 18:30:37.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.766637 systemd-networkd[738]: eth0: Gained carrier Mar 17 18:30:37.770100 systemd[1]: Started systemd-networkd.service. Mar 17 18:30:37.771183 systemd[1]: Reached target network.target. Mar 17 18:30:37.772756 systemd[1]: Starting iscsiuio.service... 
Mar 17 18:30:37.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.781840 systemd[1]: Started iscsiuio.service. Mar 17 18:30:37.783529 systemd[1]: Starting iscsid.service... Mar 17 18:30:37.789253 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:30:37.789253 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Mar 17 18:30:37.789253 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Mar 17 18:30:37.789253 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. Mar 17 18:30:37.789253 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Mar 17 18:30:37.789253 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Mar 17 18:30:37.797000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.791008 systemd[1]: Started iscsid.service. Mar 17 18:30:37.798378 systemd[1]: Starting dracut-initqueue.service... Mar 17 18:30:37.804487 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 17 18:30:37.808935 systemd[1]: Finished dracut-initqueue.service.
Mar 17 18:30:37.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.809992 systemd[1]: Reached target remote-fs-pre.target. Mar 17 18:30:37.811660 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:30:37.813478 systemd[1]: Reached target remote-fs.target. Mar 17 18:30:37.815937 systemd[1]: Starting dracut-pre-mount.service... Mar 17 18:30:37.823285 systemd[1]: Finished dracut-pre-mount.service. Mar 17 18:30:37.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.826517 ignition[643]: parsing config with SHA512: 485e01efc55ac9001fbb22d91720bd69bb2aa4020609bea17cc6a67a67401f041d7041a3f09a0104e014a02cdb758a691f9b6873a8fc5e0a4c4b57d120f3db13 Mar 17 18:30:37.832808 unknown[643]: fetched base config from "system" Mar 17 18:30:37.832820 unknown[643]: fetched user config from "qemu" Mar 17 18:30:37.833365 ignition[643]: fetch-offline: fetch-offline passed Mar 17 18:30:37.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.834491 systemd[1]: Finished ignition-fetch-offline.service. Mar 17 18:30:37.833432 ignition[643]: Ignition finished successfully Mar 17 18:30:37.836140 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Mar 17 18:30:37.836826 systemd[1]: Starting ignition-kargs.service... 
Mar 17 18:30:37.845546 ignition[759]: Ignition 2.14.0 Mar 17 18:30:37.845556 ignition[759]: Stage: kargs Mar 17 18:30:37.845645 ignition[759]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:37.845655 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:37.848000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.847760 systemd[1]: Finished ignition-kargs.service. Mar 17 18:30:37.846492 ignition[759]: kargs: kargs passed Mar 17 18:30:37.849993 systemd[1]: Starting ignition-disks.service... Mar 17 18:30:37.846533 ignition[759]: Ignition finished successfully Mar 17 18:30:37.855953 ignition[765]: Ignition 2.14.0 Mar 17 18:30:37.855961 ignition[765]: Stage: disks Mar 17 18:30:37.856044 ignition[765]: no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:37.856053 ignition[765]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:37.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.857664 systemd[1]: Finished ignition-disks.service. Mar 17 18:30:37.856887 ignition[765]: disks: disks passed Mar 17 18:30:37.859110 systemd[1]: Reached target initrd-root-device.target. Mar 17 18:30:37.856923 ignition[765]: Ignition finished successfully Mar 17 18:30:37.860839 systemd[1]: Reached target local-fs-pre.target. Mar 17 18:30:37.862278 systemd[1]: Reached target local-fs.target. Mar 17 18:30:37.863579 systemd[1]: Reached target sysinit.target. Mar 17 18:30:37.865061 systemd[1]: Reached target basic.target. Mar 17 18:30:37.867174 systemd[1]: Starting systemd-fsck-root.service... 
Mar 17 18:30:37.877653 systemd-fsck[773]: ROOT: clean, 623/553520 files, 56021/553472 blocks Mar 17 18:30:37.881485 systemd[1]: Finished systemd-fsck-root.service. Mar 17 18:30:37.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.883181 systemd[1]: Mounting sysroot.mount... Mar 17 18:30:37.891429 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Mar 17 18:30:37.891864 systemd[1]: Mounted sysroot.mount. Mar 17 18:30:37.892716 systemd[1]: Reached target initrd-root-fs.target. Mar 17 18:30:37.895034 systemd[1]: Mounting sysroot-usr.mount... Mar 17 18:30:37.895951 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Mar 17 18:30:37.895990 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 18:30:37.896013 systemd[1]: Reached target ignition-diskful.target. Mar 17 18:30:37.897928 systemd[1]: Mounted sysroot-usr.mount. Mar 17 18:30:37.899896 systemd[1]: Starting initrd-setup-root.service... Mar 17 18:30:37.904145 initrd-setup-root[783]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 18:30:37.908191 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory Mar 17 18:30:37.912451 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 18:30:37.916337 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 18:30:37.942344 systemd[1]: Finished initrd-setup-root.service. Mar 17 18:30:37.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:37.944017 systemd[1]: Starting ignition-mount.service... Mar 17 18:30:37.945362 systemd[1]: Starting sysroot-boot.service... Mar 17 18:30:37.949733 bash[825]: umount: /sysroot/usr/share/oem: not mounted. Mar 17 18:30:37.957345 ignition[827]: INFO : Ignition 2.14.0 Mar 17 18:30:37.957345 ignition[827]: INFO : Stage: mount Mar 17 18:30:37.957345 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:37.957345 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:37.961141 ignition[827]: INFO : mount: mount passed Mar 17 18:30:37.961141 ignition[827]: INFO : Ignition finished successfully Mar 17 18:30:37.961000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:37.960000 systemd[1]: Finished ignition-mount.service. Mar 17 18:30:37.962300 systemd[1]: Finished sysroot-boot.service. Mar 17 18:30:38.606279 systemd[1]: Mounting sysroot-usr-share-oem.mount... Mar 17 18:30:38.612427 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835) Mar 17 18:30:38.612454 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Mar 17 18:30:38.614679 kernel: BTRFS info (device vda6): using free space tree Mar 17 18:30:38.614692 kernel: BTRFS info (device vda6): has skinny extents Mar 17 18:30:38.617492 systemd[1]: Mounted sysroot-usr-share-oem.mount. Mar 17 18:30:38.619141 systemd[1]: Starting ignition-files.service... 
Mar 17 18:30:38.631923 ignition[855]: INFO : Ignition 2.14.0 Mar 17 18:30:38.631923 ignition[855]: INFO : Stage: files Mar 17 18:30:38.633493 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:38.633493 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:38.633493 ignition[855]: DEBUG : files: compiled without relabeling support, skipping Mar 17 18:30:38.639084 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 18:30:38.639084 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 18:30:38.641961 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 18:30:38.641961 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 18:30:38.641961 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 18:30:38.641961 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:30:38.641961 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 18:30:38.641464 unknown[855]: wrote ssh authorized keys file for user: core Mar 17 18:30:38.650103 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:30:38.650103 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Mar 17 18:30:39.331635 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Mar 17 18:30:39.350558 systemd-networkd[738]: eth0: Gained IPv6LL Mar 17 18:30:41.123876 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing 
file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Mar 17 18:30:41.125928 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:30:41.127676 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 Mar 17 18:30:41.461319 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Mar 17 18:30:41.849208 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" Mar 17 18:30:41.849208 ignition[855]: INFO : files: op(c): [started] processing unit "containerd.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(c): [finished] processing unit "containerd.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(10): [started] processing unit 
"coreos-metadata.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Mar 17 18:30:41.852926 ignition[855]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:30:41.885483 ignition[855]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Mar 17 18:30:41.887045 ignition[855]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Mar 17 18:30:41.887045 ignition[855]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:30:41.887045 ignition[855]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 18:30:41.887045 ignition[855]: INFO : files: files passed Mar 17 18:30:41.887045 ignition[855]: INFO : Ignition finished successfully Mar 17 18:30:41.904656 kernel: kauditd_printk_skb: 23 callbacks suppressed Mar 17 18:30:41.904682 kernel: audit: type=1130 audit(1742236241.887:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Mar 17 18:30:41.904693 kernel: audit: type=1130 audit(1742236241.897:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.904702 kernel: audit: type=1130 audit(1742236241.901:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.904711 kernel: audit: type=1131 audit(1742236241.901:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.901000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.886696 systemd[1]: Finished ignition-files.service. Mar 17 18:30:41.888837 systemd[1]: Starting initrd-setup-root-after-ignition.service... Mar 17 18:30:41.893292 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Mar 17 18:30:41.911680 initrd-setup-root-after-ignition[878]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Mar 17 18:30:41.893952 systemd[1]: Starting ignition-quench.service... Mar 17 18:30:41.913919 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 18:30:41.896372 systemd[1]: Finished initrd-setup-root-after-ignition.service. Mar 17 18:30:41.898156 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 18:30:41.898242 systemd[1]: Finished ignition-quench.service. Mar 17 18:30:41.901814 systemd[1]: Reached target ignition-complete.target. Mar 17 18:30:41.908224 systemd[1]: Starting initrd-parse-etc.service... Mar 17 18:30:41.920134 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 18:30:41.920222 systemd[1]: Finished initrd-parse-etc.service. Mar 17 18:30:41.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.921867 systemd[1]: Reached target initrd-fs.target. Mar 17 18:30:41.928166 kernel: audit: type=1130 audit(1742236241.921:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.928184 kernel: audit: type=1131 audit(1742236241.921:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.927576 systemd[1]: Reached target initrd.target. 
Mar 17 18:30:41.928866 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Mar 17 18:30:41.929507 systemd[1]: Starting dracut-pre-pivot.service... Mar 17 18:30:41.939414 systemd[1]: Finished dracut-pre-pivot.service. Mar 17 18:30:41.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.940837 systemd[1]: Starting initrd-cleanup.service... Mar 17 18:30:41.944457 kernel: audit: type=1130 audit(1742236241.939:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.948298 systemd[1]: Stopped target nss-lookup.target. Mar 17 18:30:41.949180 systemd[1]: Stopped target remote-cryptsetup.target. Mar 17 18:30:41.950599 systemd[1]: Stopped target timers.target. Mar 17 18:30:41.951925 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 18:30:41.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.952020 systemd[1]: Stopped dracut-pre-pivot.service. Mar 17 18:30:41.957511 kernel: audit: type=1131 audit(1742236241.952:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.953306 systemd[1]: Stopped target initrd.target. Mar 17 18:30:41.956956 systemd[1]: Stopped target basic.target. Mar 17 18:30:41.958220 systemd[1]: Stopped target ignition-complete.target. Mar 17 18:30:41.959546 systemd[1]: Stopped target ignition-diskful.target. Mar 17 18:30:41.960837 systemd[1]: Stopped target initrd-root-device.target. 
Mar 17 18:30:41.962303 systemd[1]: Stopped target remote-fs.target. Mar 17 18:30:41.963663 systemd[1]: Stopped target remote-fs-pre.target. Mar 17 18:30:41.965070 systemd[1]: Stopped target sysinit.target. Mar 17 18:30:41.966312 systemd[1]: Stopped target local-fs.target. Mar 17 18:30:41.967618 systemd[1]: Stopped target local-fs-pre.target. Mar 17 18:30:41.968894 systemd[1]: Stopped target swap.target. Mar 17 18:30:41.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.970107 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 18:30:41.975819 kernel: audit: type=1131 audit(1742236241.971:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.970211 systemd[1]: Stopped dracut-pre-mount.service. Mar 17 18:30:41.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.971638 systemd[1]: Stopped target cryptsetup.target. Mar 17 18:30:41.980947 kernel: audit: type=1131 audit(1742236241.976:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.975054 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 18:30:41.975145 systemd[1]: Stopped dracut-initqueue.service. 
Mar 17 18:30:41.976631 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 18:30:41.976723 systemd[1]: Stopped ignition-fetch-offline.service. Mar 17 18:30:41.980485 systemd[1]: Stopped target paths.target. Mar 17 18:30:41.981632 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 18:30:41.985444 systemd[1]: Stopped systemd-ask-password-console.path. Mar 17 18:30:41.986399 systemd[1]: Stopped target slices.target. Mar 17 18:30:41.987977 systemd[1]: Stopped target sockets.target. Mar 17 18:30:41.990000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.989324 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 18:30:41.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.989443 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Mar 17 18:30:41.994476 iscsid[744]: iscsid shutting down. Mar 17 18:30:41.990785 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 18:30:41.990872 systemd[1]: Stopped ignition-files.service. Mar 17 18:30:41.992758 systemd[1]: Stopping ignition-mount.service... Mar 17 18:30:41.993770 systemd[1]: Stopping iscsid.service... Mar 17 18:30:41.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.994936 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Mar 17 18:30:42.000714 ignition[895]: INFO : Ignition 2.14.0 Mar 17 18:30:42.000714 ignition[895]: INFO : Stage: umount Mar 17 18:30:42.000714 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 18:30:42.000714 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Mar 17 18:30:42.000714 ignition[895]: INFO : umount: umount passed Mar 17 18:30:42.000714 ignition[895]: INFO : Ignition finished successfully Mar 17 18:30:42.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:41.995047 systemd[1]: Stopped kmod-static-nodes.service. Mar 17 18:30:41.997005 systemd[1]: Stopping sysroot-boot.service... Mar 17 18:30:42.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.002099 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 18:30:42.010000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.002276 systemd[1]: Stopped systemd-udev-trigger.service. Mar 17 18:30:42.003608 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 18:30:42.013000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.003698 systemd[1]: Stopped dracut-pre-trigger.service. 
Mar 17 18:30:42.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.007447 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 18:30:42.016000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.007939 systemd[1]: iscsid.service: Deactivated successfully. Mar 17 18:30:42.008038 systemd[1]: Stopped iscsid.service. Mar 17 18:30:42.009847 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 18:30:42.009932 systemd[1]: Stopped ignition-mount.service. Mar 17 18:30:42.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.011281 systemd[1]: iscsid.socket: Deactivated successfully. Mar 17 18:30:42.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.011354 systemd[1]: Closed iscsid.socket. Mar 17 18:30:42.012439 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 18:30:42.012484 systemd[1]: Stopped ignition-disks.service. Mar 17 18:30:42.013904 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 18:30:42.013946 systemd[1]: Stopped ignition-kargs.service. Mar 17 18:30:42.015513 systemd[1]: ignition-setup.service: Deactivated successfully. 
Mar 17 18:30:42.015552 systemd[1]: Stopped ignition-setup.service. Mar 17 18:30:42.016937 systemd[1]: Stopping iscsiuio.service... Mar 17 18:30:42.020605 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 18:30:42.020692 systemd[1]: Finished initrd-cleanup.service. Mar 17 18:30:42.022242 systemd[1]: iscsiuio.service: Deactivated successfully. Mar 17 18:30:42.022321 systemd[1]: Stopped iscsiuio.service. Mar 17 18:30:42.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.024060 systemd[1]: Stopped target network.target. Mar 17 18:30:42.024878 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 18:30:42.024912 systemd[1]: Closed iscsiuio.socket. Mar 17 18:30:42.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.026366 systemd[1]: Stopping systemd-networkd.service... Mar 17 18:30:42.040000 audit: BPF prog-id=6 op=UNLOAD Mar 17 18:30:42.027607 systemd[1]: Stopping systemd-resolved.service... Mar 17 18:30:42.034808 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 18:30:42.044000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.034902 systemd[1]: Stopped systemd-resolved.service. Mar 17 18:30:42.045000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:42.036533 systemd-networkd[738]: eth0: DHCPv6 lease lost Mar 17 18:30:42.047000 audit: BPF prog-id=9 op=UNLOAD Mar 17 18:30:42.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.037793 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 18:30:42.037883 systemd[1]: Stopped systemd-networkd.service. Mar 17 18:30:42.039507 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 18:30:42.039534 systemd[1]: Closed systemd-networkd.socket. Mar 17 18:30:42.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.041275 systemd[1]: Stopping network-cleanup.service... Mar 17 18:30:42.042338 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 18:30:42.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.042394 systemd[1]: Stopped parse-ip-for-networkd.service. Mar 17 18:30:42.044831 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 18:30:42.044873 systemd[1]: Stopped systemd-sysctl.service. Mar 17 18:30:42.047132 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 18:30:42.047171 systemd[1]: Stopped systemd-modules-load.service. Mar 17 18:30:42.048174 systemd[1]: Stopping systemd-udevd.service... Mar 17 18:30:42.052665 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Mar 17 18:30:42.053133 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 18:30:42.053237 systemd[1]: Stopped sysroot-boot.service. 
Mar 17 18:30:42.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.054817 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 18:30:42.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.054861 systemd[1]: Stopped initrd-setup-root.service. Mar 17 18:30:42.063514 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 18:30:42.063640 systemd[1]: Stopped systemd-udevd.service. Mar 17 18:30:42.070000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.065278 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 18:30:42.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.065357 systemd[1]: Stopped network-cleanup.service. Mar 17 18:30:42.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.066472 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 18:30:42.066506 systemd[1]: Closed systemd-udevd-control.socket. Mar 17 18:30:42.067986 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 18:30:42.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:42.068015 systemd[1]: Closed systemd-udevd-kernel.socket. Mar 17 18:30:42.069334 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 18:30:42.069379 systemd[1]: Stopped dracut-pre-udev.service. Mar 17 18:30:42.081000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.081000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:42.070930 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 18:30:42.070975 systemd[1]: Stopped dracut-cmdline.service. Mar 17 18:30:42.072223 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 18:30:42.072262 systemd[1]: Stopped dracut-cmdline-ask.service. Mar 17 18:30:42.074558 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Mar 17 18:30:42.076206 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 18:30:42.076280 systemd[1]: Stopped systemd-vconsole-setup.service. Mar 17 18:30:42.079663 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 18:30:42.079747 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Mar 17 18:30:42.081390 systemd[1]: Reached target initrd-switch-root.target. Mar 17 18:30:42.091000 audit: BPF prog-id=5 op=UNLOAD Mar 17 18:30:42.091000 audit: BPF prog-id=4 op=UNLOAD Mar 17 18:30:42.091000 audit: BPF prog-id=3 op=UNLOAD Mar 17 18:30:42.091000 audit: BPF prog-id=8 op=UNLOAD Mar 17 18:30:42.091000 audit: BPF prog-id=7 op=UNLOAD Mar 17 18:30:42.083479 systemd[1]: Starting initrd-switch-root.service... Mar 17 18:30:42.088885 systemd[1]: Switching root. 
Mar 17 18:30:42.099843 systemd-journald[289]: Journal stopped Mar 17 18:30:44.099298 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Mar 17 18:30:44.099351 kernel: SELinux: Class mctp_socket not defined in policy. Mar 17 18:30:44.099364 kernel: SELinux: Class anon_inode not defined in policy. Mar 17 18:30:44.099374 kernel: SELinux: the above unknown classes and permissions will be allowed Mar 17 18:30:44.099383 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 18:30:44.099392 kernel: SELinux: policy capability open_perms=1 Mar 17 18:30:44.099402 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 18:30:44.099421 kernel: SELinux: policy capability always_check_network=0 Mar 17 18:30:44.099432 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 18:30:44.099443 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 18:30:44.099453 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 18:30:44.099462 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 18:30:44.099471 systemd[1]: Successfully loaded SELinux policy in 34.043ms. Mar 17 18:30:44.099490 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.693ms. Mar 17 18:30:44.099506 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Mar 17 18:30:44.099517 systemd[1]: Detected virtualization kvm. Mar 17 18:30:44.099527 systemd[1]: Detected architecture arm64. Mar 17 18:30:44.099543 systemd[1]: Detected first boot. Mar 17 18:30:44.099558 systemd[1]: Initializing machine ID from VM UUID. Mar 17 18:30:44.099568 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Mar 17 18:30:44.099580 systemd[1]: Populated /etc with preset unit settings. Mar 17 18:30:44.099592 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:30:44.099603 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:30:44.099614 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:30:44.099626 systemd[1]: Queued start job for default target multi-user.target. Mar 17 18:30:44.099636 systemd[1]: Unnecessary job was removed for dev-vda6.device. Mar 17 18:30:44.099646 systemd[1]: Created slice system-addon\x2dconfig.slice. Mar 17 18:30:44.099656 systemd[1]: Created slice system-addon\x2drun.slice. Mar 17 18:30:44.099666 systemd[1]: Created slice system-getty.slice. Mar 17 18:30:44.099676 systemd[1]: Created slice system-modprobe.slice. Mar 17 18:30:44.099687 systemd[1]: Created slice system-serial\x2dgetty.slice. Mar 17 18:30:44.099698 systemd[1]: Created slice system-system\x2dcloudinit.slice. Mar 17 18:30:44.099712 systemd[1]: Created slice system-systemd\x2dfsck.slice. Mar 17 18:30:44.099722 systemd[1]: Created slice user.slice. Mar 17 18:30:44.099732 systemd[1]: Started systemd-ask-password-console.path. Mar 17 18:30:44.099742 systemd[1]: Started systemd-ask-password-wall.path. Mar 17 18:30:44.099753 systemd[1]: Set up automount boot.automount. Mar 17 18:30:44.099763 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Mar 17 18:30:44.099776 systemd[1]: Reached target integritysetup.target. Mar 17 18:30:44.099787 systemd[1]: Reached target remote-cryptsetup.target. Mar 17 18:30:44.099798 systemd[1]: Reached target remote-fs.target. 
Mar 17 18:30:44.099808 systemd[1]: Reached target slices.target. Mar 17 18:30:44.099818 systemd[1]: Reached target swap.target. Mar 17 18:30:44.099828 systemd[1]: Reached target torcx.target. Mar 17 18:30:44.099838 systemd[1]: Reached target veritysetup.target. Mar 17 18:30:44.099853 systemd[1]: Listening on systemd-coredump.socket. Mar 17 18:30:44.099862 systemd[1]: Listening on systemd-initctl.socket. Mar 17 18:30:44.099873 systemd[1]: Listening on systemd-journald-audit.socket. Mar 17 18:30:44.099886 systemd[1]: Listening on systemd-journald-dev-log.socket. Mar 17 18:30:44.099900 systemd[1]: Listening on systemd-journald.socket. Mar 17 18:30:44.099911 systemd[1]: Listening on systemd-networkd.socket. Mar 17 18:30:44.099921 systemd[1]: Listening on systemd-udevd-control.socket. Mar 17 18:30:44.099931 systemd[1]: Listening on systemd-udevd-kernel.socket. Mar 17 18:30:44.099942 systemd[1]: Listening on systemd-userdbd.socket. Mar 17 18:30:44.099951 systemd[1]: Mounting dev-hugepages.mount... Mar 17 18:30:44.099961 systemd[1]: Mounting dev-mqueue.mount... Mar 17 18:30:44.099971 systemd[1]: Mounting media.mount... Mar 17 18:30:44.099981 systemd[1]: Mounting sys-kernel-debug.mount... Mar 17 18:30:44.099992 systemd[1]: Mounting sys-kernel-tracing.mount... Mar 17 18:30:44.100002 systemd[1]: Mounting tmp.mount... Mar 17 18:30:44.100012 systemd[1]: Starting flatcar-tmpfiles.service... Mar 17 18:30:44.100023 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Mar 17 18:30:44.100033 systemd[1]: Starting kmod-static-nodes.service... Mar 17 18:30:44.100048 systemd[1]: Starting modprobe@configfs.service... Mar 17 18:30:44.100062 systemd[1]: Starting modprobe@dm_mod.service... Mar 17 18:30:44.100073 systemd[1]: Starting modprobe@drm.service... Mar 17 18:30:44.100083 systemd[1]: Starting modprobe@efi_pstore.service... Mar 17 18:30:44.100094 systemd[1]: Starting modprobe@fuse.service... 
Mar 17 18:30:44.100104 systemd[1]: Starting modprobe@loop.service... Mar 17 18:30:44.100114 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 18:30:44.100126 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 18:30:44.100137 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Mar 17 18:30:44.100146 kernel: fuse: init (API version 7.34) Mar 17 18:30:44.100155 systemd[1]: Starting systemd-journald.service... Mar 17 18:30:44.100166 systemd[1]: Starting systemd-modules-load.service... Mar 17 18:30:44.100181 systemd[1]: Starting systemd-network-generator.service... Mar 17 18:30:44.100194 systemd[1]: Starting systemd-remount-fs.service... Mar 17 18:30:44.100204 kernel: loop: module loaded Mar 17 18:30:44.100213 systemd[1]: Starting systemd-udev-trigger.service... Mar 17 18:30:44.100223 systemd[1]: Mounted dev-hugepages.mount. Mar 17 18:30:44.100235 systemd-journald[1021]: Journal started Mar 17 18:30:44.100274 systemd-journald[1021]: Runtime Journal (/run/log/journal/31d4b345d6c54c80b72571a197f1110c) is 6.0M, max 48.7M, 42.6M free. 
Mar 17 18:30:44.010000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Mar 17 18:30:44.010000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Mar 17 18:30:44.098000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Mar 17 18:30:44.098000 audit[1021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffc7012510 a2=4000 a3=1 items=0 ppid=1 pid=1021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:44.098000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Mar 17 18:30:44.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.102473 systemd[1]: Started systemd-journald.service. Mar 17 18:30:44.103044 systemd[1]: Mounted dev-mqueue.mount. Mar 17 18:30:44.104257 systemd[1]: Mounted media.mount. Mar 17 18:30:44.105073 systemd[1]: Mounted sys-kernel-debug.mount. Mar 17 18:30:44.105989 systemd[1]: Mounted sys-kernel-tracing.mount. Mar 17 18:30:44.106865 systemd[1]: Mounted tmp.mount. Mar 17 18:30:44.108819 systemd[1]: Finished kmod-static-nodes.service. Mar 17 18:30:44.109893 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 18:30:44.110115 systemd[1]: Finished modprobe@configfs.service. 
Mar 17 18:30:44.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.111195 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 18:30:44.111391 systemd[1]: Finished modprobe@dm_mod.service. Mar 17 18:30:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.112425 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 18:30:44.112622 systemd[1]: Finished modprobe@drm.service. Mar 17 18:30:44.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.112000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:44.113618 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 18:30:44.113803 systemd[1]: Finished modprobe@efi_pstore.service. Mar 17 18:30:44.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.114957 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 18:30:44.115151 systemd[1]: Finished modprobe@fuse.service. Mar 17 18:30:44.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.116222 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 18:30:44.116427 systemd[1]: Finished modprobe@loop.service. Mar 17 18:30:44.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.117599 systemd[1]: Finished systemd-modules-load.service. 
Mar 17 18:30:44.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.118837 systemd[1]: Finished systemd-network-generator.service. Mar 17 18:30:44.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.121806 systemd[1]: Finished systemd-remount-fs.service. Mar 17 18:30:44.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:44.122998 systemd[1]: Reached target network-pre.target. Mar 17 18:30:44.124881 systemd[1]: Mounting sys-fs-fuse-connections.mount... Mar 17 18:30:44.126695 systemd[1]: Mounting sys-kernel-config.mount... Mar 17 18:30:44.127528 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 18:30:44.131025 systemd[1]: Starting systemd-hwdb-update.service... Mar 17 18:30:44.132794 systemd[1]: Starting systemd-journal-flush.service... Mar 17 18:30:44.133708 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 18:30:44.134713 systemd[1]: Starting systemd-random-seed.service... Mar 17 18:30:44.135642 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Mar 17 18:30:44.136823 systemd[1]: Starting systemd-sysctl.service... 
Mar 17 18:30:44.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.141945 systemd-journald[1021]: Time spent on flushing to /var/log/journal/31d4b345d6c54c80b72571a197f1110c is 21.777ms for 931 entries.
Mar 17 18:30:44.141945 systemd-journald[1021]: System Journal (/var/log/journal/31d4b345d6c54c80b72571a197f1110c) is 8.0M, max 195.6M, 187.6M free.
Mar 17 18:30:44.171392 systemd-journald[1021]: Received client request to flush runtime journal.
Mar 17 18:30:44.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.166000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.139542 systemd[1]: Finished flatcar-tmpfiles.service.
Mar 17 18:30:44.140507 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Mar 17 18:30:44.141465 systemd[1]: Mounted sys-kernel-config.mount.
Mar 17 18:30:44.171884 udevadm[1074]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 17 18:30:44.144345 systemd[1]: Starting systemd-sysusers.service...
Mar 17 18:30:44.148008 systemd[1]: Finished systemd-udev-trigger.service.
Mar 17 18:30:44.150027 systemd[1]: Starting systemd-udev-settle.service...
Mar 17 18:30:44.162761 systemd[1]: Finished systemd-random-seed.service.
Mar 17 18:30:44.166853 systemd[1]: Finished systemd-sysctl.service.
Mar 17 18:30:44.167944 systemd[1]: Finished systemd-sysusers.service.
Mar 17 18:30:44.168924 systemd[1]: Reached target first-boot-complete.target.
Mar 17 18:30:44.170920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Mar 17 18:30:44.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.173647 systemd[1]: Finished systemd-journal-flush.service.
Mar 17 18:30:44.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.195544 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Mar 17 18:30:44.502878 systemd[1]: Finished systemd-hwdb-update.service.
Mar 17 18:30:44.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.504995 systemd[1]: Starting systemd-udevd.service...
Mar 17 18:30:44.521901 systemd-udevd[1085]: Using default interface naming scheme 'v252'.
Mar 17 18:30:44.534028 systemd[1]: Started systemd-udevd.service.
Mar 17 18:30:44.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.536448 systemd[1]: Starting systemd-networkd.service...
Mar 17 18:30:44.544069 systemd[1]: Starting systemd-userdbd.service...
Mar 17 18:30:44.549331 systemd[1]: Found device dev-ttyAMA0.device.
Mar 17 18:30:44.583262 systemd[1]: Started systemd-userdbd.service.
Mar 17 18:30:44.583000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.589065 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Mar 17 18:30:44.635687 systemd-networkd[1094]: lo: Link UP
Mar 17 18:30:44.635701 systemd-networkd[1094]: lo: Gained carrier
Mar 17 18:30:44.636022 systemd-networkd[1094]: Enumeration completed
Mar 17 18:30:44.636125 systemd-networkd[1094]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 17 18:30:44.636137 systemd[1]: Started systemd-networkd.service.
Mar 17 18:30:44.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.642680 systemd-networkd[1094]: eth0: Link UP
Mar 17 18:30:44.642692 systemd-networkd[1094]: eth0: Gained carrier
Mar 17 18:30:44.652859 systemd[1]: Finished systemd-udev-settle.service.
Mar 17 18:30:44.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.654946 systemd[1]: Starting lvm2-activation-early.service...
Mar 17 18:30:44.664691 lvm[1119]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:30:44.668504 systemd-networkd[1094]: eth0: DHCPv4 address 10.0.0.124/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 17 18:30:44.693367 systemd[1]: Finished lvm2-activation-early.service.
Mar 17 18:30:44.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.694522 systemd[1]: Reached target cryptsetup.target.
Mar 17 18:30:44.696384 systemd[1]: Starting lvm2-activation.service...
Mar 17 18:30:44.699799 lvm[1121]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 17 18:30:44.734246 systemd[1]: Finished lvm2-activation.service.
Mar 17 18:30:44.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.735198 systemd[1]: Reached target local-fs-pre.target.
Mar 17 18:30:44.736043 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 17 18:30:44.736074 systemd[1]: Reached target local-fs.target.
Mar 17 18:30:44.736850 systemd[1]: Reached target machines.target.
Mar 17 18:30:44.742233 systemd[1]: Starting ldconfig.service...
Mar 17 18:30:44.743379 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:30:44.743465 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:44.745916 systemd[1]: Starting systemd-boot-update.service...
Mar 17 18:30:44.749460 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Mar 17 18:30:44.755182 systemd[1]: Starting systemd-machine-id-commit.service...
Mar 17 18:30:44.757548 systemd[1]: Starting systemd-sysext.service...
Mar 17 18:30:44.761148 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1124 (bootctl)
Mar 17 18:30:44.762114 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Mar 17 18:30:44.771539 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Mar 17 18:30:44.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.783273 systemd[1]: Unmounting usr-share-oem.mount...
Mar 17 18:30:44.787808 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Mar 17 18:30:44.788037 systemd[1]: Unmounted usr-share-oem.mount.
Mar 17 18:30:44.825722 systemd[1]: Finished systemd-machine-id-commit.service.
Mar 17 18:30:44.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.828443 kernel: loop0: detected capacity change from 0 to 194096
Mar 17 18:30:44.842431 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 18:30:44.845354 systemd-fsck[1136]: fsck.fat 4.2 (2021-01-31)
Mar 17 18:30:44.845354 systemd-fsck[1136]: /dev/vda1: 236 files, 117179/258078 clusters
Mar 17 18:30:44.847711 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Mar 17 18:30:44.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.857499 kernel: loop1: detected capacity change from 0 to 194096
Mar 17 18:30:44.862151 (sd-sysext)[1143]: Using extensions 'kubernetes'.
Mar 17 18:30:44.862492 (sd-sysext)[1143]: Merged extensions into '/usr'.
Mar 17 18:30:44.876472 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:30:44.877632 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:30:44.879580 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:30:44.881478 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:30:44.882419 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:30:44.882566 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:44.883319 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:30:44.883501 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:30:44.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.884857 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:30:44.885064 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:30:44.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.887080 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:30:44.887237 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:30:44.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:44.888620 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:30:44.888715 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:30:44.951658 ldconfig[1123]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 18:30:44.955734 systemd[1]: Finished ldconfig.service.
Mar 17 18:30:44.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.099117 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 18:30:45.100953 systemd[1]: Mounting boot.mount...
Mar 17 18:30:45.102839 systemd[1]: Mounting usr-share-oem.mount...
Mar 17 18:30:45.107601 systemd[1]: Mounted usr-share-oem.mount.
Mar 17 18:30:45.109554 systemd[1]: Finished systemd-sysext.service.
Mar 17 18:30:45.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.111779 systemd[1]: Mounted boot.mount.
Mar 17 18:30:45.113763 systemd[1]: Starting ensure-sysext.service...
Mar 17 18:30:45.115949 systemd[1]: Starting systemd-tmpfiles-setup.service...
Mar 17 18:30:45.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.120746 systemd[1]: Finished systemd-boot-update.service.
Mar 17 18:30:45.121806 systemd[1]: Reloading.
Mar 17 18:30:45.125321 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Mar 17 18:30:45.125997 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 18:30:45.127301 systemd-tmpfiles[1160]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 18:30:45.164116 /usr/lib/systemd/system-generators/torcx-generator[1181]: time="2025-03-17T18:30:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
Mar 17 18:30:45.164502 /usr/lib/systemd/system-generators/torcx-generator[1181]: time="2025-03-17T18:30:45Z" level=info msg="torcx already run"
Mar 17 18:30:45.226943 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Mar 17 18:30:45.226964 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Mar 17 18:30:45.242953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 18:30:45.297930 systemd[1]: Finished systemd-tmpfiles-setup.service.
Mar 17 18:30:45.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.301213 systemd[1]: Starting audit-rules.service...
Mar 17 18:30:45.303188 systemd[1]: Starting clean-ca-certificates.service...
Mar 17 18:30:45.305289 systemd[1]: Starting systemd-journal-catalog-update.service...
Mar 17 18:30:45.307864 systemd[1]: Starting systemd-resolved.service...
Mar 17 18:30:45.310216 systemd[1]: Starting systemd-timesyncd.service...
Mar 17 18:30:45.312516 systemd[1]: Starting systemd-update-utmp.service...
Mar 17 18:30:45.314119 systemd[1]: Finished clean-ca-certificates.service.
Mar 17 18:30:45.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.318698 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.320113 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:30:45.320000 audit[1239]: SYSTEM_BOOT pid=1239 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.322079 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:30:45.323972 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:30:45.324803 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.324937 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:45.325042 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:30:45.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.326033 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:30:45.326186 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:30:45.327622 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:30:45.327753 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:30:45.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.331513 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:30:45.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.333294 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:30:45.333554 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:30:45.334952 systemd[1]: Finished systemd-journal-catalog-update.service.
Mar 17 18:30:45.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.336558 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.337907 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:30:45.339980 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:30:45.340787 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.340914 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:45.342302 systemd[1]: Starting systemd-update-done.service...
Mar 17 18:30:45.343191 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:30:45.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.344499 systemd[1]: Finished systemd-update-utmp.service.
Mar 17 18:30:45.345745 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:30:45.345872 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:30:45.347120 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:30:45.347282 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:30:45.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.351305 systemd[1]: Finished systemd-update-done.service.
Mar 17 18:30:45.351000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.352745 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.354111 systemd[1]: Starting modprobe@dm_mod.service...
Mar 17 18:30:45.356173 systemd[1]: Starting modprobe@drm.service...
Mar 17 18:30:45.358129 systemd[1]: Starting modprobe@efi_pstore.service...
Mar 17 18:30:45.360125 systemd[1]: Starting modprobe@loop.service...
Mar 17 18:30:45.361021 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.361153 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:45.362522 systemd[1]: Starting systemd-networkd-wait-online.service...
Mar 17 18:30:45.363532 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 18:30:45.364724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 18:30:45.364859 systemd[1]: Finished modprobe@dm_mod.service.
Mar 17 18:30:45.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.366138 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 18:30:45.366385 systemd[1]: Finished modprobe@drm.service.
Mar 17 18:30:45.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.367659 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 18:30:45.367870 systemd[1]: Finished modprobe@efi_pstore.service.
Mar 17 18:30:45.369152 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 18:30:45.369318 systemd[1]: Finished modprobe@loop.service.
Mar 17 18:30:45.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.370731 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 18:30:45.370815 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:30:45.371868 systemd[1]: Finished ensure-sysext.service.
Mar 17 18:30:45.380000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Mar 17 18:30:45.380000 audit[1277]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc373fe0 a2=420 a3=0 items=0 ppid=1227 pid=1277 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:30:45.380000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Mar 17 18:30:45.380931 augenrules[1277]: No rules
Mar 17 18:30:45.382032 systemd[1]: Finished audit-rules.service.
Mar 17 18:30:45.388668 systemd[1]: Started systemd-timesyncd.service.
Mar 17 18:30:45.389723 systemd-timesyncd[1233]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 17 18:30:45.389776 systemd-timesyncd[1233]: Initial clock synchronization to Mon 2025-03-17 18:30:45.132225 UTC.
Mar 17 18:30:45.389886 systemd[1]: Reached target time-set.target.
Mar 17 18:30:45.402829 systemd-resolved[1232]: Positive Trust Anchors:
Mar 17 18:30:45.402842 systemd-resolved[1232]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 18:30:45.402868 systemd-resolved[1232]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Mar 17 18:30:45.412757 systemd-resolved[1232]: Defaulting to hostname 'linux'.
Mar 17 18:30:45.414236 systemd[1]: Started systemd-resolved.service.
Mar 17 18:30:45.415144 systemd[1]: Reached target network.target.
Mar 17 18:30:45.415940 systemd[1]: Reached target nss-lookup.target.
Mar 17 18:30:45.416803 systemd[1]: Reached target sysinit.target.
Mar 17 18:30:45.417664 systemd[1]: Started motdgen.path.
Mar 17 18:30:45.418387 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Mar 17 18:30:45.419650 systemd[1]: Started logrotate.timer.
Mar 17 18:30:45.420519 systemd[1]: Started mdadm.timer.
Mar 17 18:30:45.421202 systemd[1]: Started systemd-tmpfiles-clean.timer.
Mar 17 18:30:45.422071 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 18:30:45.422101 systemd[1]: Reached target paths.target.
Mar 17 18:30:45.422855 systemd[1]: Reached target timers.target.
Mar 17 18:30:45.423932 systemd[1]: Listening on dbus.socket.
Mar 17 18:30:45.425838 systemd[1]: Starting docker.socket...
Mar 17 18:30:45.427558 systemd[1]: Listening on sshd.socket.
Mar 17 18:30:45.428439 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:45.428767 systemd[1]: Listening on docker.socket.
Mar 17 18:30:45.429581 systemd[1]: Reached target sockets.target.
Mar 17 18:30:45.430355 systemd[1]: Reached target basic.target.
Mar 17 18:30:45.431270 systemd[1]: System is tainted: cgroupsv1
Mar 17 18:30:45.431319 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.431338 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Mar 17 18:30:45.432338 systemd[1]: Starting containerd.service...
Mar 17 18:30:45.434128 systemd[1]: Starting dbus.service...
Mar 17 18:30:45.435993 systemd[1]: Starting enable-oem-cloudinit.service...
Mar 17 18:30:45.438130 systemd[1]: Starting extend-filesystems.service...
Mar 17 18:30:45.439008 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Mar 17 18:30:45.440337 systemd[1]: Starting motdgen.service...
Mar 17 18:30:45.442510 systemd[1]: Starting prepare-helm.service...
Mar 17 18:30:45.444565 systemd[1]: Starting ssh-key-proc-cmdline.service...
Mar 17 18:30:45.446587 systemd[1]: Starting sshd-keygen.service...
Mar 17 18:30:45.449128 systemd[1]: Starting systemd-logind.service...
Mar 17 18:30:45.450063 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Mar 17 18:30:45.450133 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 18:30:45.451479 systemd[1]: Starting update-engine.service...
Mar 17 18:30:45.454268 jq[1289]: false
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found loop1
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda1
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda2
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda3
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found usr
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda4
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda6
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda7
Mar 17 18:30:45.459474 extend-filesystems[1290]: Found vda9
Mar 17 18:30:45.459474 extend-filesystems[1290]: Checking size of /dev/vda9
Mar 17 18:30:45.456843 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Mar 17 18:30:45.489597 jq[1304]: true
Mar 17 18:30:45.460850 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 18:30:45.461076 systemd[1]: Finished ssh-key-proc-cmdline.service.
Mar 17 18:30:45.494714 tar[1308]: linux-arm64/helm
Mar 17 18:30:45.477951 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 18:30:45.495731 jq[1311]: true
Mar 17 18:30:45.478213 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Mar 17 18:30:45.499750 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 18:30:45.499969 systemd[1]: Finished motdgen.service.
Mar 17 18:30:45.501578 extend-filesystems[1290]: Resized partition /dev/vda9
Mar 17 18:30:45.506327 dbus-daemon[1288]: [system] SELinux support is enabled
Mar 17 18:30:45.515802 extend-filesystems[1334]: resize2fs 1.46.5 (30-Dec-2021)
Mar 17 18:30:45.506563 systemd[1]: Started dbus.service.
Mar 17 18:30:45.510035 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 18:30:45.510054 systemd[1]: Reached target system-config.target.
Mar 17 18:30:45.510995 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 18:30:45.511010 systemd[1]: Reached target user-config.target.
Mar 17 18:30:45.527432 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 17 18:30:45.565438 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 17 18:30:45.571601 update_engine[1301]: I0317 18:30:45.568562 1301 main.cc:92] Flatcar Update Engine starting
Mar 17 18:30:45.582926 update_engine[1301]: I0317 18:30:45.579124 1301 update_check_scheduler.cc:74] Next update check in 8m31s
Mar 17 18:30:45.579112 systemd[1]: Started update-engine.service.
Mar 17 18:30:45.581707 systemd[1]: Started locksmithd.service.
Mar 17 18:30:45.583131 systemd-logind[1300]: Watching system buttons on /dev/input/event0 (Power Button)
Mar 17 18:30:45.583309 systemd-logind[1300]: New seat seat0.
Mar 17 18:30:45.584807 extend-filesystems[1334]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 18:30:45.584807 extend-filesystems[1334]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 17 18:30:45.584807 extend-filesystems[1334]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 17 18:30:45.591076 extend-filesystems[1290]: Resized filesystem in /dev/vda9
Mar 17 18:30:45.586116 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 18:30:45.592638 bash[1347]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 18:30:45.586458 systemd[1]: Finished extend-filesystems.service.
Mar 17 18:30:45.588725 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Mar 17 18:30:45.590269 systemd[1]: Started systemd-logind.service.
Mar 17 18:30:45.624849 env[1316]: time="2025-03-17T18:30:45.624764640Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Mar 17 18:30:45.642028 env[1316]: time="2025-03-17T18:30:45.641988880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 18:30:45.642148 env[1316]: time="2025-03-17T18:30:45.642126880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645330 env[1316]: time="2025-03-17T18:30:45.645295440Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.179-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645330 env[1316]: time="2025-03-17T18:30:45.645328800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645589 env[1316]: time="2025-03-17T18:30:45.645565560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645635 env[1316]: time="2025-03-17T18:30:45.645589440Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645635 env[1316]: time="2025-03-17T18:30:45.645603480Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Mar 17 18:30:45.645635 env[1316]: time="2025-03-17T18:30:45.645613720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645703 env[1316]: time="2025-03-17T18:30:45.645686000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.645900 env[1316]: time="2025-03-17T18:30:45.645876400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 18:30:45.646070 env[1316]: time="2025-03-17T18:30:45.646042880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 18:30:45.646070 env[1316]: time="2025-03-17T18:30:45.646064680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 18:30:45.646131 env[1316]: time="2025-03-17T18:30:45.646115440Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Mar 17 18:30:45.646131 env[1316]: time="2025-03-17T18:30:45.646128400Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 18:30:45.649493 env[1316]: time="2025-03-17T18:30:45.649460000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 18:30:45.649579 env[1316]: time="2025-03-17T18:30:45.649499120Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 18:30:45.649579 env[1316]: time="2025-03-17T18:30:45.649514280Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 18:30:45.649579 env[1316]: time="2025-03-17T18:30:45.649543840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649579 env[1316]: time="2025-03-17T18:30:45.649558280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649579 env[1316]: time="2025-03-17T18:30:45.649571320Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649672 env[1316]: time="2025-03-17T18:30:45.649585520Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649930 env[1316]: time="2025-03-17T18:30:45.649911760Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649979 env[1316]: time="2025-03-17T18:30:45.649936560Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649979 env[1316]: time="2025-03-17T18:30:45.649952040Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649979 env[1316]: time="2025-03-17T18:30:45.649965400Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 18:30:45.649979 env[1316]: time="2025-03-17T18:30:45.649978240Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 18:30:45.650117 env[1316]: time="2025-03-17T18:30:45.650093000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 18:30:45.650201 env[1316]: time="2025-03-17T18:30:45.650182920Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 18:30:45.650714 env[1316]: time="2025-03-17T18:30:45.650674440Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..."
type=io.containerd.service.v1 Mar 17 18:30:45.650778 env[1316]: time="2025-03-17T18:30:45.650759640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.650807 env[1316]: time="2025-03-17T18:30:45.650778920Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Mar 17 18:30:45.650910 env[1316]: time="2025-03-17T18:30:45.650893960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.650944 env[1316]: time="2025-03-17T18:30:45.650911880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.650944 env[1316]: time="2025-03-17T18:30:45.650925720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.650944 env[1316]: time="2025-03-17T18:30:45.650936800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651005 env[1316]: time="2025-03-17T18:30:45.650948840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651005 env[1316]: time="2025-03-17T18:30:45.650961120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651005 env[1316]: time="2025-03-17T18:30:45.650971760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651005 env[1316]: time="2025-03-17T18:30:45.650982960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651005 env[1316]: time="2025-03-17T18:30:45.650996280Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Mar 17 18:30:45.651151 env[1316]: time="2025-03-17T18:30:45.651130360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651193 env[1316]: time="2025-03-17T18:30:45.651153680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651193 env[1316]: time="2025-03-17T18:30:45.651177040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Mar 17 18:30:45.651193 env[1316]: time="2025-03-17T18:30:45.651187960Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Mar 17 18:30:45.651248 env[1316]: time="2025-03-17T18:30:45.651202080Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Mar 17 18:30:45.651248 env[1316]: time="2025-03-17T18:30:45.651213840Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Mar 17 18:30:45.651248 env[1316]: time="2025-03-17T18:30:45.651233640Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Mar 17 18:30:45.651343 env[1316]: time="2025-03-17T18:30:45.651268680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Mar 17 18:30:45.651529 env[1316]: time="2025-03-17T18:30:45.651474600Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.651534040Z" level=info msg="Connect containerd service" Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.651566560Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.652140280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.652524400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.652565920Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 18:30:45.654310 env[1316]: time="2025-03-17T18:30:45.652610160Z" level=info msg="containerd successfully booted in 0.028538s" Mar 17 18:30:45.652709 systemd[1]: Started containerd.service. 
Mar 17 18:30:45.654759 env[1316]: time="2025-03-17T18:30:45.654720120Z" level=info msg="Start subscribing containerd event" Mar 17 18:30:45.654934 env[1316]: time="2025-03-17T18:30:45.654901200Z" level=info msg="Start recovering state" Mar 17 18:30:45.655007 env[1316]: time="2025-03-17T18:30:45.654988720Z" level=info msg="Start event monitor" Mar 17 18:30:45.655038 env[1316]: time="2025-03-17T18:30:45.655017440Z" level=info msg="Start snapshots syncer" Mar 17 18:30:45.655038 env[1316]: time="2025-03-17T18:30:45.655031800Z" level=info msg="Start cni network conf syncer for default" Mar 17 18:30:45.655075 env[1316]: time="2025-03-17T18:30:45.655039920Z" level=info msg="Start streaming server" Mar 17 18:30:45.681167 locksmithd[1348]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 17 18:30:45.878512 systemd-networkd[1094]: eth0: Gained IPv6LL Mar 17 18:30:45.881125 systemd[1]: Finished systemd-networkd-wait-online.service. Mar 17 18:30:45.882383 systemd[1]: Reached target network-online.target. Mar 17 18:30:45.884802 systemd[1]: Starting kubelet.service... Mar 17 18:30:45.896055 tar[1308]: linux-arm64/LICENSE Mar 17 18:30:45.896941 tar[1308]: linux-arm64/README.md Mar 17 18:30:45.902612 systemd[1]: Finished prepare-helm.service. Mar 17 18:30:46.374313 systemd[1]: Started kubelet.service. Mar 17 18:30:46.714749 sshd_keygen[1312]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 17 18:30:46.732146 systemd[1]: Finished sshd-keygen.service. Mar 17 18:30:46.734538 systemd[1]: Starting issuegen.service... Mar 17 18:30:46.738974 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 18:30:46.739156 systemd[1]: Finished issuegen.service. Mar 17 18:30:46.741300 systemd[1]: Starting systemd-user-sessions.service... Mar 17 18:30:46.747052 systemd[1]: Finished systemd-user-sessions.service. Mar 17 18:30:46.749143 systemd[1]: Started getty@tty1.service. Mar 17 18:30:46.751077 systemd[1]: Started serial-getty@ttyAMA0.service. 
Mar 17 18:30:46.752147 systemd[1]: Reached target getty.target. Mar 17 18:30:46.753058 systemd[1]: Reached target multi-user.target. Mar 17 18:30:46.755105 systemd[1]: Starting systemd-update-utmp-runlevel.service... Mar 17 18:30:46.761077 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Mar 17 18:30:46.761276 systemd[1]: Finished systemd-update-utmp-runlevel.service. Mar 17 18:30:46.762426 systemd[1]: Startup finished in 7.180s (kernel) + 4.614s (userspace) = 11.795s. Mar 17 18:30:46.855667 kubelet[1375]: E0317 18:30:46.855632 1375 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:30:46.857611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:30:46.857746 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:30:48.906063 systemd[1]: Created slice system-sshd.slice. Mar 17 18:30:48.907215 systemd[1]: Started sshd@0-10.0.0.124:22-10.0.0.1:38492.service. Mar 17 18:30:48.946748 sshd[1402]: Accepted publickey for core from 10.0.0.1 port 38492 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:48.948549 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:48.955838 systemd[1]: Created slice user-500.slice. Mar 17 18:30:48.956725 systemd[1]: Starting user-runtime-dir@500.service... Mar 17 18:30:48.958463 systemd-logind[1300]: New session 1 of user core. Mar 17 18:30:48.964658 systemd[1]: Finished user-runtime-dir@500.service. Mar 17 18:30:48.965736 systemd[1]: Starting user@500.service... 
Mar 17 18:30:48.968932 (systemd)[1407]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.026357 systemd[1407]: Queued start job for default target default.target. Mar 17 18:30:49.026544 systemd[1407]: Reached target paths.target. Mar 17 18:30:49.026558 systemd[1407]: Reached target sockets.target. Mar 17 18:30:49.026569 systemd[1407]: Reached target timers.target. Mar 17 18:30:49.026590 systemd[1407]: Reached target basic.target. Mar 17 18:30:49.026682 systemd[1]: Started user@500.service. Mar 17 18:30:49.027358 systemd[1]: Started session-1.scope. Mar 17 18:30:49.027477 systemd[1407]: Reached target default.target. Mar 17 18:30:49.027524 systemd[1407]: Startup finished in 53ms. Mar 17 18:30:49.074899 systemd[1]: Started sshd@1-10.0.0.124:22-10.0.0.1:38494.service. Mar 17 18:30:49.115227 sshd[1416]: Accepted publickey for core from 10.0.0.1 port 38494 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.116546 sshd[1416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.120533 systemd[1]: Started session-2.scope. Mar 17 18:30:49.120673 systemd-logind[1300]: New session 2 of user core. Mar 17 18:30:49.171221 sshd[1416]: pam_unix(sshd:session): session closed for user core Mar 17 18:30:49.173298 systemd[1]: Started sshd@2-10.0.0.124:22-10.0.0.1:38498.service. Mar 17 18:30:49.173811 systemd[1]: sshd@1-10.0.0.124:22-10.0.0.1:38494.service: Deactivated successfully. Mar 17 18:30:49.174744 systemd-logind[1300]: Session 2 logged out. Waiting for processes to exit. Mar 17 18:30:49.174788 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 18:30:49.175926 systemd-logind[1300]: Removed session 2. 
Mar 17 18:30:49.205554 sshd[1421]: Accepted publickey for core from 10.0.0.1 port 38498 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.206596 sshd[1421]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.209322 systemd-logind[1300]: New session 3 of user core. Mar 17 18:30:49.210000 systemd[1]: Started session-3.scope. Mar 17 18:30:49.258086 sshd[1421]: pam_unix(sshd:session): session closed for user core Mar 17 18:30:49.259939 systemd[1]: Started sshd@3-10.0.0.124:22-10.0.0.1:38508.service. Mar 17 18:30:49.260336 systemd[1]: sshd@2-10.0.0.124:22-10.0.0.1:38498.service: Deactivated successfully. Mar 17 18:30:49.261194 systemd-logind[1300]: Session 3 logged out. Waiting for processes to exit. Mar 17 18:30:49.261238 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 18:30:49.261879 systemd-logind[1300]: Removed session 3. Mar 17 18:30:49.292834 sshd[1428]: Accepted publickey for core from 10.0.0.1 port 38508 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.293770 sshd[1428]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.296474 systemd-logind[1300]: New session 4 of user core. Mar 17 18:30:49.297203 systemd[1]: Started session-4.scope. Mar 17 18:30:49.347956 sshd[1428]: pam_unix(sshd:session): session closed for user core Mar 17 18:30:49.350050 systemd[1]: Started sshd@4-10.0.0.124:22-10.0.0.1:38524.service. Mar 17 18:30:49.350784 systemd[1]: sshd@3-10.0.0.124:22-10.0.0.1:38508.service: Deactivated successfully. Mar 17 18:30:49.351776 systemd-logind[1300]: Session 4 logged out. Waiting for processes to exit. Mar 17 18:30:49.351820 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 18:30:49.352508 systemd-logind[1300]: Removed session 4. 
Mar 17 18:30:49.384586 sshd[1435]: Accepted publickey for core from 10.0.0.1 port 38524 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.385662 sshd[1435]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.388763 systemd-logind[1300]: New session 5 of user core. Mar 17 18:30:49.389541 systemd[1]: Started session-5.scope. Mar 17 18:30:49.450774 sudo[1441]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 18:30:49.451005 sudo[1441]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:30:49.460605 dbus-daemon[1288]: avc: received setenforce notice (enforcing=1) Mar 17 18:30:49.461434 sudo[1441]: pam_unix(sudo:session): session closed for user root Mar 17 18:30:49.463118 sshd[1435]: pam_unix(sshd:session): session closed for user core Mar 17 18:30:49.465828 systemd[1]: Started sshd@5-10.0.0.124:22-10.0.0.1:38530.service. Mar 17 18:30:49.466919 systemd[1]: sshd@4-10.0.0.124:22-10.0.0.1:38524.service: Deactivated successfully. Mar 17 18:30:49.467689 systemd-logind[1300]: Session 5 logged out. Waiting for processes to exit. Mar 17 18:30:49.467737 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 18:30:49.468394 systemd-logind[1300]: Removed session 5. Mar 17 18:30:49.498681 sshd[1443]: Accepted publickey for core from 10.0.0.1 port 38530 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.499875 sshd[1443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.503014 systemd-logind[1300]: New session 6 of user core. Mar 17 18:30:49.503786 systemd[1]: Started session-6.scope. 
Mar 17 18:30:49.555185 sudo[1450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 18:30:49.555438 sudo[1450]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:30:49.558115 sudo[1450]: pam_unix(sudo:session): session closed for user root Mar 17 18:30:49.562692 sudo[1449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Mar 17 18:30:49.563141 sudo[1449]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:30:49.572261 systemd[1]: Stopping audit-rules.service... Mar 17 18:30:49.572000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:30:49.574566 auditctl[1453]: No rules Mar 17 18:30:49.574946 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 18:30:49.575157 systemd[1]: Stopped audit-rules.service. Mar 17 18:30:49.575873 kernel: kauditd_printk_skb: 123 callbacks suppressed Mar 17 18:30:49.575911 kernel: audit: type=1305 audit(1742236249.572:156): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Mar 17 18:30:49.575927 kernel: audit: type=1300 audit(1742236249.572:156): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc1d5010 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:49.572000 audit[1453]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdc1d5010 a2=420 a3=0 items=0 ppid=1 pid=1453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:49.576699 systemd[1]: Starting 
audit-rules.service... Mar 17 18:30:49.579625 kernel: audit: type=1327 audit(1742236249.572:156): proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:30:49.572000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Mar 17 18:30:49.580848 kernel: audit: type=1131 audit(1742236249.573:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.598582 augenrules[1471]: No rules Mar 17 18:30:49.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.600226 sudo[1449]: pam_unix(sudo:session): session closed for user root Mar 17 18:30:49.599271 systemd[1]: Finished audit-rules.service. Mar 17 18:30:49.599000 audit[1449]: USER_END pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.605302 kernel: audit: type=1130 audit(1742236249.599:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.605362 kernel: audit: type=1106 audit(1742236249.599:159): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Mar 17 18:30:49.599000 audit[1449]: CRED_DISP pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.605526 sshd[1443]: pam_unix(sshd:session): session closed for user core Mar 17 18:30:49.607727 systemd[1]: Started sshd@6-10.0.0.124:22-10.0.0.1:38534.service. Mar 17 18:30:49.608259 kernel: audit: type=1104 audit(1742236249.599:160): pid=1449 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.608298 kernel: audit: type=1130 audit(1742236249.607:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.124:22-10.0.0.1:38534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.124:22-10.0.0.1:38534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.609000 audit[1443]: USER_END pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.612670 systemd[1]: sshd@5-10.0.0.124:22-10.0.0.1:38530.service: Deactivated successfully. Mar 17 18:30:49.613292 systemd[1]: session-6.scope: Deactivated successfully. 
Mar 17 18:30:49.614855 kernel: audit: type=1106 audit(1742236249.609:162): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.610000 audit[1443]: CRED_DISP pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.618429 kernel: audit: type=1104 audit(1742236249.610:163): pid=1443 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.611000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.124:22-10.0.0.1:38530 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.618160 systemd-logind[1300]: Session 6 logged out. Waiting for processes to exit. Mar 17 18:30:49.619764 systemd-logind[1300]: Removed session 6. 
Mar 17 18:30:49.641000 audit[1476]: USER_ACCT pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.641701 sshd[1476]: Accepted publickey for core from 10.0.0.1 port 38534 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:30:49.641000 audit[1476]: CRED_ACQ pid=1476 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.641000 audit[1476]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffee659fe0 a2=3 a3=1 items=0 ppid=1 pid=1476 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:49.641000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:30:49.642582 sshd[1476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:30:49.646098 systemd[1]: Started session-7.scope. Mar 17 18:30:49.646322 systemd-logind[1300]: New session 7 of user core. 
Mar 17 18:30:49.649000 audit[1476]: USER_START pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.650000 audit[1481]: CRED_ACQ pid=1481 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:30:49.695000 audit[1482]: USER_ACCT pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.695726 sudo[1482]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 18:30:49.695000 audit[1482]: CRED_REFR pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.696098 sudo[1482]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Mar 17 18:30:49.697000 audit[1482]: USER_START pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:30:49.748747 systemd[1]: Starting docker.service... 
Mar 17 18:30:49.828556 env[1494]: time="2025-03-17T18:30:49.828507690Z" level=info msg="Starting up" Mar 17 18:30:49.830005 env[1494]: time="2025-03-17T18:30:49.829968075Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:30:49.830005 env[1494]: time="2025-03-17T18:30:49.829989092Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:30:49.830005 env[1494]: time="2025-03-17T18:30:49.830005765Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:30:49.830098 env[1494]: time="2025-03-17T18:30:49.830016058Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:30:49.831832 env[1494]: time="2025-03-17T18:30:49.831809348Z" level=info msg="parsed scheme: \"unix\"" module=grpc Mar 17 18:30:49.831941 env[1494]: time="2025-03-17T18:30:49.831930206Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Mar 17 18:30:49.832002 env[1494]: time="2025-03-17T18:30:49.831986564Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Mar 17 18:30:49.832052 env[1494]: time="2025-03-17T18:30:49.832040300Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Mar 17 18:30:50.001629 env[1494]: time="2025-03-17T18:30:50.001543429Z" level=warning msg="Your kernel does not support cgroup blkio weight" Mar 17 18:30:50.001915 env[1494]: time="2025-03-17T18:30:50.001895304Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Mar 17 18:30:50.002328 env[1494]: time="2025-03-17T18:30:50.002309972Z" level=info msg="Loading containers: start." 
Mar 17 18:30:50.052000 audit[1528]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1528 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.052000 audit[1528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff666b100 a2=0 a3=1 items=0 ppid=1494 pid=1528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.052000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Mar 17 18:30:50.054000 audit[1530]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1530 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.054000 audit[1530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffea82ad20 a2=0 a3=1 items=0 ppid=1494 pid=1530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.054000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Mar 17 18:30:50.055000 audit[1532]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.055000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffffbdb4a50 a2=0 a3=1 items=0 ppid=1494 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.055000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:30:50.057000 
audit[1534]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.057000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7926bb0 a2=0 a3=1 items=0 ppid=1494 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.057000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:30:50.059000 audit[1536]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.059000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd9e1ffd0 a2=0 a3=1 items=0 ppid=1494 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.059000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Mar 17 18:30:50.088000 audit[1541]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1541 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.088000 audit[1541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffc3241e0 a2=0 a3=1 items=0 ppid=1494 pid=1541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.088000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Mar 17 18:30:50.094000 audit[1543]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.094000 audit[1543]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff2270b70 a2=0 a3=1 items=0 ppid=1494 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.094000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Mar 17 18:30:50.095000 audit[1545]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.095000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffecfdf930 a2=0 a3=1 items=0 ppid=1494 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.095000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Mar 17 18:30:50.097000 audit[1547]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.097000 audit[1547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffc22f0070 a2=0 a3=1 items=0 ppid=1494 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.097000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:30:50.104000 audit[1551]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.104000 audit[1551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdeeef920 a2=0 a3=1 items=0 ppid=1494 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.104000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:30:50.117000 audit[1552]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1552 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.117000 audit[1552]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffde8c9580 a2=0 a3=1 items=0 ppid=1494 pid=1552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.117000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:30:50.126422 kernel: Initializing XFRM netlink socket Mar 17 18:30:50.148970 env[1494]: time="2025-03-17T18:30:50.148927898Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Mar 17 18:30:50.163000 audit[1560]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1560 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.163000 audit[1560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff0057a40 a2=0 a3=1 items=0 ppid=1494 pid=1560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.163000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Mar 17 18:30:50.174000 audit[1563]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.174000 audit[1563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc5546cf0 a2=0 a3=1 items=0 ppid=1494 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.174000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Mar 17 18:30:50.177000 audit[1566]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.177000 audit[1566]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff991b080 a2=0 a3=1 items=0 ppid=1494 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 
17 18:30:50.177000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Mar 17 18:30:50.177000 audit[1568]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.177000 audit[1568]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe9f14ac0 a2=0 a3=1 items=0 ppid=1494 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.177000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Mar 17 18:30:50.179000 audit[1570]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.179000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffd7336f50 a2=0 a3=1 items=0 ppid=1494 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.179000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Mar 17 18:30:50.181000 audit[1572]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.181000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe1bb8710 a2=0 a3=1 items=0 ppid=1494 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.181000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Mar 17 18:30:50.183000 audit[1574]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.183000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc6b6f070 a2=0 a3=1 items=0 ppid=1494 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.183000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Mar 17 18:30:50.193000 audit[1577]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1577 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.193000 audit[1577]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=fffffe6e2aa0 a2=0 a3=1 items=0 ppid=1494 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.193000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Mar 17 18:30:50.195000 audit[1579]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.195000 
audit[1579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffcb536bd0 a2=0 a3=1 items=0 ppid=1494 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.195000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Mar 17 18:30:50.197000 audit[1581]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.197000 audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc4548530 a2=0 a3=1 items=0 ppid=1494 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.197000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Mar 17 18:30:50.199000 audit[1583]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.199000 audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc0aed9d0 a2=0 a3=1 items=0 ppid=1494 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.199000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Mar 17 18:30:50.201473 systemd-networkd[1094]: docker0: Link UP Mar 17 18:30:50.208000 audit[1587]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.208000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe16bb540 a2=0 a3=1 items=0 ppid=1494 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.208000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:30:50.226000 audit[1588]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1588 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:30:50.226000 audit[1588]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd5564c40 a2=0 a3=1 items=0 ppid=1494 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:30:50.226000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Mar 17 18:30:50.227736 env[1494]: time="2025-03-17T18:30:50.227696540Z" level=info msg="Loading containers: done." Mar 17 18:30:50.249886 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1758938841-merged.mount: Deactivated successfully. 
Mar 17 18:30:50.253187 env[1494]: time="2025-03-17T18:30:50.253098156Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 17 18:30:50.253277 env[1494]: time="2025-03-17T18:30:50.253258003Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Mar 17 18:30:50.253464 env[1494]: time="2025-03-17T18:30:50.253346659Z" level=info msg="Daemon has completed initialization" Mar 17 18:30:50.270686 systemd[1]: Started docker.service. Mar 17 18:30:50.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:50.273459 env[1494]: time="2025-03-17T18:30:50.273419010Z" level=info msg="API listen on /run/docker.sock" Mar 17 18:30:50.996195 env[1316]: time="2025-03-17T18:30:50.996141823Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 17 18:30:51.547982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1396781833.mount: Deactivated successfully. 
Mar 17 18:30:52.952429 env[1316]: time="2025-03-17T18:30:52.952357998Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:52.953896 env[1316]: time="2025-03-17T18:30:52.953855929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:52.955648 env[1316]: time="2025-03-17T18:30:52.955620907Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:52.957801 env[1316]: time="2025-03-17T18:30:52.957777506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:52.958475 env[1316]: time="2025-03-17T18:30:52.958450249Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:fcbef283ab16167d1ca4acb66836af518e9fe445111fbc618fdbe196858f9530\"" Mar 17 18:30:52.967453 env[1316]: time="2025-03-17T18:30:52.967395415Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 17 18:30:54.678613 env[1316]: time="2025-03-17T18:30:54.678554421Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:54.679752 env[1316]: time="2025-03-17T18:30:54.679720267Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:30:54.681269 env[1316]: time="2025-03-17T18:30:54.681246325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:54.683799 env[1316]: time="2025-03-17T18:30:54.683755472Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:54.684492 env[1316]: time="2025-03-17T18:30:54.684461221Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:9469d949b9e8c03b6cb06af513f683dd2975b57092f3deb2a9e125e0d05188d3\"" Mar 17 18:30:54.693078 env[1316]: time="2025-03-17T18:30:54.693053279Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 17 18:30:55.897404 env[1316]: time="2025-03-17T18:30:55.897325445Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:55.899185 env[1316]: time="2025-03-17T18:30:55.899141939Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:55.901489 env[1316]: time="2025-03-17T18:30:55.901458231Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:55.903111 env[1316]: time="2025-03-17T18:30:55.903069173Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:55.903890 env[1316]: time="2025-03-17T18:30:55.903850025Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:3540cd10f52fac0a58ba43c004c6d3941e2a9f53e06440b982b9c130a72c0213\"" Mar 17 18:30:55.912454 env[1316]: time="2025-03-17T18:30:55.912424143Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 18:30:56.941131 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3834927679.mount: Deactivated successfully. Mar 17 18:30:56.942025 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 18:30:56.942153 systemd[1]: Stopped kubelet.service. Mar 17 18:30:56.948393 kernel: kauditd_printk_skb: 84 callbacks suppressed Mar 17 18:30:56.948481 kernel: audit: type=1130 audit(1742236256.940:198): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:56.948509 kernel: audit: type=1131 audit(1742236256.940:199): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:56.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:56.940000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:56.943492 systemd[1]: Starting kubelet.service... 
Mar 17 18:30:57.027144 systemd[1]: Started kubelet.service. Mar 17 18:30:57.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:57.030435 kernel: audit: type=1130 audit(1742236257.026:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:30:57.073824 kubelet[1660]: E0317 18:30:57.073777 1660 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 18:30:57.076075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 18:30:57.076214 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 18:30:57.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:30:57.079434 kernel: audit: type=1131 audit(1742236257.075:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Mar 17 18:30:57.448753 env[1316]: time="2025-03-17T18:30:57.448709051Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:57.450369 env[1316]: time="2025-03-17T18:30:57.450331861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:57.451821 env[1316]: time="2025-03-17T18:30:57.451780372Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.11,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:57.453167 env[1316]: time="2025-03-17T18:30:57.453123271Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:57.453581 env[1316]: time="2025-03-17T18:30:57.453539167Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:fe83790bf8a35411788b67fe5f0ce35309056c40530484d516af2ca01375220c\"" Mar 17 18:30:57.464472 env[1316]: time="2025-03-17T18:30:57.464442164Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 17 18:30:58.026185 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3696967253.mount: Deactivated successfully. 
Mar 17 18:30:58.987995 env[1316]: time="2025-03-17T18:30:58.987947577Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:58.989444 env[1316]: time="2025-03-17T18:30:58.989418779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:58.991274 env[1316]: time="2025-03-17T18:30:58.991246933Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:58.992879 env[1316]: time="2025-03-17T18:30:58.992854129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:58.996632 env[1316]: time="2025-03-17T18:30:58.996597390Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Mar 17 18:30:59.005757 env[1316]: time="2025-03-17T18:30:59.005727512Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 17 18:30:59.416223 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3278110038.mount: Deactivated successfully. 
Mar 17 18:30:59.419813 env[1316]: time="2025-03-17T18:30:59.419775258Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:59.421220 env[1316]: time="2025-03-17T18:30:59.421179211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:59.423106 env[1316]: time="2025-03-17T18:30:59.423080726Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:59.424329 env[1316]: time="2025-03-17T18:30:59.424294523Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:30:59.424954 env[1316]: time="2025-03-17T18:30:59.424923297Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Mar 17 18:30:59.434536 env[1316]: time="2025-03-17T18:30:59.434506997Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 17 18:30:59.991492 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3258829039.mount: Deactivated successfully. 
Mar 17 18:31:02.581310 env[1316]: time="2025-03-17T18:31:02.581262568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:02.582736 env[1316]: time="2025-03-17T18:31:02.582712040Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:02.584461 env[1316]: time="2025-03-17T18:31:02.584434989Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:02.586338 env[1316]: time="2025-03-17T18:31:02.586314302Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:02.587271 env[1316]: time="2025-03-17T18:31:02.587243359Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:014faa467e29798aeef733fe6d1a3b5e382688217b053ad23410e6cccd5d22fd\"" Mar 17 18:31:07.154992 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 17 18:31:07.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.155155 systemd[1]: Stopped kubelet.service. Mar 17 18:31:07.156527 systemd[1]: Starting kubelet.service... Mar 17 18:31:07.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:07.161839 kernel: audit: type=1130 audit(1742236267.154:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.161924 kernel: audit: type=1131 audit(1742236267.154:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.167478 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Mar 17 18:31:07.167545 systemd[1]: kubelet.service: Failed with result 'signal'. Mar 17 18:31:07.167807 systemd[1]: Stopped kubelet.service. Mar 17 18:31:07.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:31:07.169865 systemd[1]: Starting kubelet.service... Mar 17 18:31:07.171433 kernel: audit: type=1130 audit(1742236267.167:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Mar 17 18:31:07.185048 systemd[1]: Reloading. 
Mar 17 18:31:07.233515 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2025-03-17T18:31:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:31:07.233843 /usr/lib/systemd/system-generators/torcx-generator[1792]: time="2025-03-17T18:31:07Z" level=info msg="torcx already run" Mar 17 18:31:07.301386 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Mar 17 18:31:07.301480 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:31:07.316712 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:31:07.386383 systemd[1]: Started kubelet.service. Mar 17 18:31:07.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.389425 kernel: audit: type=1130 audit(1742236267.385:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.390703 systemd[1]: Stopping kubelet.service... Mar 17 18:31:07.391392 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:31:07.391747 systemd[1]: Stopped kubelet.service. 
Mar 17 18:31:07.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.393942 systemd[1]: Starting kubelet.service... Mar 17 18:31:07.395442 kernel: audit: type=1131 audit(1742236267.390:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.474546 systemd[1]: Started kubelet.service. Mar 17 18:31:07.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.479989 kernel: audit: type=1130 audit(1742236267.473:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:07.512548 kubelet[1855]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:31:07.512548 kubelet[1855]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:31:07.512548 kubelet[1855]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 18:31:07.513449 kubelet[1855]: I0317 18:31:07.513386 1855 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:31:08.775449 kubelet[1855]: I0317 18:31:08.775395 1855 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:31:08.775449 kubelet[1855]: I0317 18:31:08.775441 1855 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:31:08.775780 kubelet[1855]: I0317 18:31:08.775635 1855 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:31:08.815052 kubelet[1855]: E0317 18:31:08.815024 1855 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.124:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.816271 kubelet[1855]: I0317 18:31:08.816248 1855 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:31:08.826006 kubelet[1855]: I0317 18:31:08.825990 1855 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:31:08.826610 kubelet[1855]: I0317 18:31:08.826585 1855 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:31:08.826761 kubelet[1855]: I0317 18:31:08.826614 1855 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:31:08.826844 kubelet[1855]: I0317 18:31:08.826834 1855 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:31:08.826870 kubelet[1855]: I0317 18:31:08.826845 1855 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:31:08.827101 kubelet[1855]: I0317 18:31:08.827085 1855 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:31:08.828127 kubelet[1855]: I0317 18:31:08.828104 1855 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:31:08.828127 kubelet[1855]: I0317 18:31:08.828126 1855 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:31:08.828354 kubelet[1855]: I0317 18:31:08.828340 1855 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:31:08.828483 kubelet[1855]: I0317 18:31:08.828464 1855 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:31:08.828795 kubelet[1855]: W0317 18:31:08.828740 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.828834 kubelet[1855]: E0317 18:31:08.828805 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.124:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.828897 kubelet[1855]: W0317 18:31:08.828848 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.828897 kubelet[1855]: E0317 18:31:08.828891 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: 
connection refused Mar 17 18:31:08.829436 kubelet[1855]: I0317 18:31:08.829420 1855 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:31:08.829777 kubelet[1855]: I0317 18:31:08.829767 1855 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:31:08.829940 kubelet[1855]: W0317 18:31:08.829930 1855 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 17 18:31:08.830698 kubelet[1855]: I0317 18:31:08.830676 1855 server.go:1264] "Started kubelet" Mar 17 18:31:08.841044 kubelet[1855]: I0317 18:31:08.840993 1855 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:31:08.841438 kubelet[1855]: I0317 18:31:08.841398 1855 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:31:08.841566 kubelet[1855]: I0317 18:31:08.841544 1855 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:31:08.850000 audit[1855]: AVC avc: denied { mac_admin } for pid=1855 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:08.850846 kubelet[1855]: I0317 18:31:08.850763 1855 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:31:08.850846 kubelet[1855]: I0317 18:31:08.850795 1855 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:31:08.850846 kubelet[1855]: I0317 18:31:08.850843 1855 
fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:31:08.850000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:08.854462 kernel: audit: type=1400 audit(1742236268.850:208): avc: denied { mac_admin } for pid=1855 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:08.854520 kernel: audit: type=1401 audit(1742236268.850:208): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:08.854538 kernel: audit: type=1300 audit(1742236268.850:208): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a469c0 a1=4000a04870 a2=4000a46990 a3=25 items=0 ppid=1 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.850000 audit[1855]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000a469c0 a1=4000a04870 a2=4000a46990 a3=25 items=0 ppid=1 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.857089 kubelet[1855]: I0317 18:31:08.857059 1855 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:31:08.857880 kernel: audit: type=1327 audit(1742236268.850:208): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:08.850000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:08.859510 kubelet[1855]: I0317 18:31:08.859476 1855 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:31:08.859577 kubelet[1855]: I0317 18:31:08.859532 1855 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:31:08.850000 audit[1855]: AVC avc: denied { mac_admin } for pid=1855 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:08.850000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:08.850000 audit[1855]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40005015c0 a1=4000a04888 a2=4000a46a50 a3=25 items=0 ppid=1 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.850000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:08.853000 audit[1867]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.853000 audit[1867]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd90ef0e0 a2=0 a3=1 items=0 ppid=1855 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.853000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:31:08.854000 audit[1868]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.854000 audit[1868]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffb44e240 a2=0 a3=1 items=0 ppid=1855 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.854000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:31:08.858000 audit[1870]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.858000 audit[1870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe43c5bf0 a2=0 a3=1 items=0 ppid=1855 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.858000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:31:08.859000 audit[1872]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.859000 audit[1872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe6d31040 a2=0 a3=1 items=0 ppid=1855 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.859000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:31:08.862203 kubelet[1855]: E0317 18:31:08.862157 1855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="200ms" Mar 17 18:31:08.862758 kubelet[1855]: I0317 18:31:08.862738 1855 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:31:08.862925 kubelet[1855]: I0317 18:31:08.862906 1855 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:31:08.870932 kubelet[1855]: W0317 18:31:08.870885 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.871052 kubelet[1855]: E0317 18:31:08.871037 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.871477 kubelet[1855]: E0317 18:31:08.871179 1855 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.124:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.124:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182daaa7de74064b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-17 18:31:08.830656075 +0000 UTC m=+1.353014887,LastTimestamp:2025-03-17 18:31:08.830656075 +0000 UTC m=+1.353014887,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 17 18:31:08.872681 kubelet[1855]: I0317 18:31:08.872661 1855 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:31:08.873042 kubelet[1855]: I0317 18:31:08.873011 1855 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:31:08.878000 audit[1876]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.878000 audit[1876]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe42a7290 a2=0 a3=1 items=0 ppid=1855 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.878000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Mar 17 18:31:08.880590 kubelet[1855]: I0317 18:31:08.880529 1855 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 18:31:08.880000 audit[1879]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1879 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:08.880000 audit[1879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffdb02a30 a2=0 a3=1 items=0 ppid=1855 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Mar 17 18:31:08.881505 kubelet[1855]: I0317 18:31:08.881435 1855 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:31:08.881592 kubelet[1855]: I0317 18:31:08.881575 1855 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:31:08.881624 kubelet[1855]: I0317 18:31:08.881601 1855 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:31:08.881663 kubelet[1855]: E0317 18:31:08.881642 1855 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:31:08.882000 audit[1880]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.882000 audit[1880]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe1ee3620 a2=0 a3=1 items=0 ppid=1855 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 
18:31:08.882000 audit[1881]: NETFILTER_CFG table=nat:33 family=2 entries=1 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.882000 audit[1881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe8a509d0 a2=0 a3=1 items=0 ppid=1855 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.882000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:31:08.883000 audit[1882]: NETFILTER_CFG table=mangle:34 family=10 entries=1 op=nft_register_chain pid=1882 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:08.883000 audit[1882]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffce151f60 a2=0 a3=1 items=0 ppid=1855 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.883000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Mar 17 18:31:08.884000 audit[1883]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:08.884000 audit[1883]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdfe10ce0 a2=0 a3=1 items=0 ppid=1855 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.884000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:31:08.885343 kubelet[1855]: W0317 18:31:08.885281 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.885399 kubelet[1855]: E0317 18:31:08.885344 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.124:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:08.885000 audit[1884]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:08.885000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffc2031960 a2=0 a3=1 items=0 ppid=1855 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.885000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Mar 17 18:31:08.886000 audit[1885]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:08.886000 audit[1885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff25ae970 a2=0 a3=1 items=0 ppid=1855 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.886000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Mar 17 18:31:08.893567 kubelet[1855]: I0317 18:31:08.893546 1855 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:31:08.893666 kubelet[1855]: I0317 18:31:08.893653 1855 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:31:08.893722 kubelet[1855]: I0317 18:31:08.893713 1855 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:31:08.895739 kubelet[1855]: I0317 18:31:08.895719 1855 policy_none.go:49] "None policy: Start" Mar 17 18:31:08.896433 kubelet[1855]: I0317 18:31:08.896403 1855 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:31:08.896496 kubelet[1855]: I0317 18:31:08.896464 1855 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:31:08.901346 kubelet[1855]: I0317 18:31:08.901323 1855 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:31:08.900000 audit[1855]: AVC avc: denied { mac_admin } for pid=1855 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:08.900000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:08.900000 audit[1855]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000f4d560 a1=4000f96540 a2=4000f4d530 a3=25 items=0 ppid=1 pid=1855 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:08.900000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:08.901583 kubelet[1855]: I0317 18:31:08.901401 1855 server.go:88] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:31:08.901583 kubelet[1855]: I0317 18:31:08.901515 1855 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:31:08.901633 kubelet[1855]: I0317 18:31:08.901609 1855 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:31:08.902969 kubelet[1855]: E0317 18:31:08.902953 1855 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Mar 17 18:31:08.959067 kubelet[1855]: I0317 18:31:08.959047 1855 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:31:08.959389 kubelet[1855]: E0317 18:31:08.959363 1855 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 17 18:31:08.982696 kubelet[1855]: I0317 18:31:08.982659 1855 topology_manager.go:215] "Topology Admit Handler" podUID="53e7813b23671b82968158398f98ebdd" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:31:08.983597 kubelet[1855]: I0317 18:31:08.983571 1855 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:31:08.984440 kubelet[1855]: I0317 18:31:08.984400 1855 topology_manager.go:215] "Topology Admit Handler" 
podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:31:09.062991 kubelet[1855]: E0317 18:31:09.062896 1855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="400ms" Mar 17 18:31:09.160797 kubelet[1855]: I0317 18:31:09.160769 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:09.160878 kubelet[1855]: I0317 18:31:09.160864 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:09.160909 kubelet[1855]: I0317 18:31:09.160893 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:31:09.160944 kubelet[1855]: I0317 18:31:09.160934 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:09.160971 kubelet[1855]: I0317 18:31:09.160952 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:09.160994 kubelet[1855]: I0317 18:31:09.160969 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:09.161015 kubelet[1855]: I0317 18:31:09.161006 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:09.161037 kubelet[1855]: I0317 18:31:09.161023 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:09.161061 kubelet[1855]: I0317 18:31:09.161040 1855 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: 
\"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:09.161302 kubelet[1855]: I0317 18:31:09.161281 1855 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:31:09.161717 kubelet[1855]: E0317 18:31:09.161690 1855 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 17 18:31:09.289254 kubelet[1855]: E0317 18:31:09.289220 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:09.289475 kubelet[1855]: E0317 18:31:09.289227 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:09.290260 env[1316]: time="2025-03-17T18:31:09.290222313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53e7813b23671b82968158398f98ebdd,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:09.290577 env[1316]: time="2025-03-17T18:31:09.290290372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:09.291793 kubelet[1855]: E0317 18:31:09.291773 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:09.292250 env[1316]: time="2025-03-17T18:31:09.292219529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:09.463357 kubelet[1855]: E0317 18:31:09.463259 1855 controller.go:145] "Failed to ensure lease exists, 
will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="800ms" Mar 17 18:31:09.563797 kubelet[1855]: I0317 18:31:09.563753 1855 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:31:09.564119 kubelet[1855]: E0317 18:31:09.564088 1855 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.124:6443/api/v1/nodes\": dial tcp 10.0.0.124:6443: connect: connection refused" node="localhost" Mar 17 18:31:09.893731 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223796392.mount: Deactivated successfully. Mar 17 18:31:09.899363 env[1316]: time="2025-03-17T18:31:09.899316513Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.901050 env[1316]: time="2025-03-17T18:31:09.901021885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.902483 env[1316]: time="2025-03-17T18:31:09.902455463Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.903209 env[1316]: time="2025-03-17T18:31:09.903177983Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.905343 env[1316]: time="2025-03-17T18:31:09.905307481Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.906927 env[1316]: 
time="2025-03-17T18:31:09.906902139Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.908389 env[1316]: time="2025-03-17T18:31:09.908359441Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.910155 env[1316]: time="2025-03-17T18:31:09.910128758Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.912264 env[1316]: time="2025-03-17T18:31:09.912231296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.914001 env[1316]: time="2025-03-17T18:31:09.913963788Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.915583 env[1316]: time="2025-03-17T18:31:09.915552693Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.917326 env[1316]: time="2025-03-17T18:31:09.917272484Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:09.944860 env[1316]: time="2025-03-17T18:31:09.944786134Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:09.944860 env[1316]: time="2025-03-17T18:31:09.944825915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:09.944860 env[1316]: time="2025-03-17T18:31:09.944836539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:09.945084 env[1316]: time="2025-03-17T18:31:09.945024019Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:09.945084 env[1316]: time="2025-03-17T18:31:09.945062881Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:09.945084 env[1316]: time="2025-03-17T18:31:09.945073585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:09.945436 env[1316]: time="2025-03-17T18:31:09.945382962Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/68fefdb1f2c0def8bab1a03993d36042ac3c98660b9a7f89f768216747e9ce4b pid=1904 runtime=io.containerd.runc.v2 Mar 17 18:31:09.945717 env[1316]: time="2025-03-17T18:31:09.945674327Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ebcd19c4f62ed655185d133644cc2c6c36ea3cc0e23b1d7bfdee5d23da8fa0d0 pid=1905 runtime=io.containerd.runc.v2 Mar 17 18:31:09.948522 env[1316]: time="2025-03-17T18:31:09.948392745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:09.948522 env[1316]: time="2025-03-17T18:31:09.948487164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:09.948522 env[1316]: time="2025-03-17T18:31:09.948503021Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:09.948830 env[1316]: time="2025-03-17T18:31:09.948788754Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6e8317c29dd25fce52505f8022f1ed44cb37e6c3cf0e86dd226b17b702e5721a pid=1924 runtime=io.containerd.runc.v2 Mar 17 18:31:10.026678 kubelet[1855]: W0317 18:31:10.026601 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:10.026678 kubelet[1855]: E0317 18:31:10.026678 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.124:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:10.026994 env[1316]: time="2025-03-17T18:31:10.026720606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"68fefdb1f2c0def8bab1a03993d36042ac3c98660b9a7f89f768216747e9ce4b\"" Mar 17 18:31:10.028709 kubelet[1855]: E0317 18:31:10.028680 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:10.031327 env[1316]: time="2025-03-17T18:31:10.031271098Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:53e7813b23671b82968158398f98ebdd,Namespace:kube-system,Attempt:0,} returns sandbox id \"ebcd19c4f62ed655185d133644cc2c6c36ea3cc0e23b1d7bfdee5d23da8fa0d0\"" Mar 17 18:31:10.031923 kubelet[1855]: E0317 18:31:10.031782 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:10.032263 env[1316]: time="2025-03-17T18:31:10.032226289Z" level=info msg="CreateContainer within sandbox \"68fefdb1f2c0def8bab1a03993d36042ac3c98660b9a7f89f768216747e9ce4b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 17 18:31:10.033685 env[1316]: time="2025-03-17T18:31:10.033653304Z" level=info msg="CreateContainer within sandbox \"ebcd19c4f62ed655185d133644cc2c6c36ea3cc0e23b1d7bfdee5d23da8fa0d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 17 18:31:10.038260 env[1316]: time="2025-03-17T18:31:10.038230241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e8317c29dd25fce52505f8022f1ed44cb37e6c3cf0e86dd226b17b702e5721a\"" Mar 17 18:31:10.040116 kubelet[1855]: E0317 18:31:10.039966 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:10.041966 env[1316]: time="2025-03-17T18:31:10.041920258Z" level=info msg="CreateContainer within sandbox \"6e8317c29dd25fce52505f8022f1ed44cb37e6c3cf0e86dd226b17b702e5721a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 17 18:31:10.044181 env[1316]: time="2025-03-17T18:31:10.044144430Z" level=info msg="CreateContainer within sandbox \"68fefdb1f2c0def8bab1a03993d36042ac3c98660b9a7f89f768216747e9ce4b\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"b992bc4ee617c590cf1eb428cde4241c3d1568d0f4ee3a4e0cad205cad4684b1\"" Mar 17 18:31:10.044691 env[1316]: time="2025-03-17T18:31:10.044665629Z" level=info msg="StartContainer for \"b992bc4ee617c590cf1eb428cde4241c3d1568d0f4ee3a4e0cad205cad4684b1\"" Mar 17 18:31:10.047331 env[1316]: time="2025-03-17T18:31:10.047291676Z" level=info msg="CreateContainer within sandbox \"ebcd19c4f62ed655185d133644cc2c6c36ea3cc0e23b1d7bfdee5d23da8fa0d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e7565623aad02d1b099e132f73acd23639bf761658e37d8b27ec50e251a4ea4\"" Mar 17 18:31:10.047720 env[1316]: time="2025-03-17T18:31:10.047657558Z" level=info msg="StartContainer for \"6e7565623aad02d1b099e132f73acd23639bf761658e37d8b27ec50e251a4ea4\"" Mar 17 18:31:10.055132 env[1316]: time="2025-03-17T18:31:10.055097553Z" level=info msg="CreateContainer within sandbox \"6e8317c29dd25fce52505f8022f1ed44cb37e6c3cf0e86dd226b17b702e5721a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"04e7a9541ef4a51003704de5e693c8618fb9968325829e37109e5d7b6ce21b99\"" Mar 17 18:31:10.055918 env[1316]: time="2025-03-17T18:31:10.055826520Z" level=info msg="StartContainer for \"04e7a9541ef4a51003704de5e693c8618fb9968325829e37109e5d7b6ce21b99\"" Mar 17 18:31:10.124602 env[1316]: time="2025-03-17T18:31:10.124562912Z" level=info msg="StartContainer for \"6e7565623aad02d1b099e132f73acd23639bf761658e37d8b27ec50e251a4ea4\" returns successfully" Mar 17 18:31:10.139851 env[1316]: time="2025-03-17T18:31:10.138538684Z" level=info msg="StartContainer for \"b992bc4ee617c590cf1eb428cde4241c3d1568d0f4ee3a4e0cad205cad4684b1\" returns successfully" Mar 17 18:31:10.163785 env[1316]: time="2025-03-17T18:31:10.159126892Z" level=info msg="StartContainer for \"04e7a9541ef4a51003704de5e693c8618fb9968325829e37109e5d7b6ce21b99\" returns successfully" Mar 17 18:31:10.213510 kubelet[1855]: W0317 
18:31:10.211167 1855 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:10.213510 kubelet[1855]: E0317 18:31:10.211245 1855 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.124:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.124:6443: connect: connection refused Mar 17 18:31:10.263918 kubelet[1855]: E0317 18:31:10.263862 1855 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.124:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.124:6443: connect: connection refused" interval="1.6s" Mar 17 18:31:10.365528 kubelet[1855]: I0317 18:31:10.365465 1855 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:31:10.895117 kubelet[1855]: E0317 18:31:10.894730 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:10.895946 kubelet[1855]: E0317 18:31:10.895927 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:10.896906 kubelet[1855]: E0317 18:31:10.896889 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:11.830719 kubelet[1855]: I0317 18:31:11.830675 1855 apiserver.go:52] "Watching apiserver" Mar 17 18:31:11.858713 kubelet[1855]: I0317 18:31:11.858672 1855 kubelet_node_status.go:76] "Successfully registered node" 
node="localhost" Mar 17 18:31:11.860286 kubelet[1855]: I0317 18:31:11.860259 1855 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:31:11.903141 kubelet[1855]: E0317 18:31:11.903106 1855 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:11.903764 kubelet[1855]: E0317 18:31:11.903745 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:11.909465 kubelet[1855]: E0317 18:31:11.909433 1855 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:11.909913 kubelet[1855]: E0317 18:31:11.909870 1855 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:13.929633 systemd[1]: Reloading. Mar 17 18:31:13.975525 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2025-03-17T18:31:13Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" Mar 17 18:31:13.975552 /usr/lib/systemd/system-generators/torcx-generator[2148]: time="2025-03-17T18:31:13Z" level=info msg="torcx already run" Mar 17 18:31:14.038601 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
Mar 17 18:31:14.038745 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Mar 17 18:31:14.054040 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 18:31:14.122338 systemd[1]: Stopping kubelet.service... Mar 17 18:31:14.133900 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 18:31:14.134191 systemd[1]: Stopped kubelet.service. Mar 17 18:31:14.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:14.134935 kernel: kauditd_printk_skb: 44 callbacks suppressed Mar 17 18:31:14.134996 kernel: audit: type=1131 audit(1742236274.132:223): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:14.139222 systemd[1]: Starting kubelet.service... Mar 17 18:31:14.221337 systemd[1]: Started kubelet.service. Mar 17 18:31:14.220000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:14.226287 kernel: audit: type=1130 audit(1742236274.220:224): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:14.255878 kubelet[2201]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:31:14.255878 kubelet[2201]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 18:31:14.255878 kubelet[2201]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 18:31:14.256222 kubelet[2201]: I0317 18:31:14.255919 2201 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 18:31:14.260279 kubelet[2201]: I0317 18:31:14.260253 2201 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 18:31:14.260378 kubelet[2201]: I0317 18:31:14.260367 2201 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 18:31:14.260618 kubelet[2201]: I0317 18:31:14.260602 2201 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 18:31:14.261932 kubelet[2201]: I0317 18:31:14.261908 2201 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 17 18:31:14.263576 kubelet[2201]: I0317 18:31:14.263532 2201 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 18:31:14.269221 kubelet[2201]: I0317 18:31:14.269188 2201 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 17 18:31:14.269634 kubelet[2201]: I0317 18:31:14.269575 2201 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 18:31:14.269794 kubelet[2201]: I0317 18:31:14.269633 2201 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 18:31:14.269874 kubelet[2201]: I0317 18:31:14.269796 2201 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 
18:31:14.269874 kubelet[2201]: I0317 18:31:14.269806 2201 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 18:31:14.269874 kubelet[2201]: I0317 18:31:14.269836 2201 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:31:14.269992 kubelet[2201]: I0317 18:31:14.269926 2201 kubelet.go:400] "Attempting to sync node with API server" Mar 17 18:31:14.269992 kubelet[2201]: I0317 18:31:14.269937 2201 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 18:31:14.269992 kubelet[2201]: I0317 18:31:14.269959 2201 kubelet.go:312] "Adding apiserver pod source" Mar 17 18:31:14.269992 kubelet[2201]: I0317 18:31:14.269972 2201 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 18:31:14.270416 kubelet[2201]: I0317 18:31:14.270380 2201 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Mar 17 18:31:14.270614 kubelet[2201]: I0317 18:31:14.270582 2201 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 18:31:14.271098 kubelet[2201]: I0317 18:31:14.271072 2201 server.go:1264] "Started kubelet" Mar 17 18:31:14.270000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:14.273786 kubelet[2201]: I0317 18:31:14.273644 2201 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 18:31:14.273924 kubelet[2201]: I0317 18:31:14.273897 2201 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 18:31:14.273963 kubelet[2201]: I0317 18:31:14.273935 2201 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 18:31:14.270000 audit: SELINUX_ERR op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:14.274819 kubelet[2201]: I0317 18:31:14.274781 2201 server.go:455] "Adding debug handlers to kubelet server" Mar 17 18:31:14.275122 kubelet[2201]: E0317 18:31:14.275078 2201 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 18:31:14.275937 kernel: audit: type=1400 audit(1742236274.270:225): avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:14.280368 kernel: audit: type=1401 audit(1742236274.270:225): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:14.280397 kernel: audit: type=1300 audit(1742236274.270:225): arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b78d80 a1=4000c600f0 a2=4000b78d50 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:14.270000 audit[2201]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000b78d80 a1=4000c600f0 a2=4000b78d50 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:14.270000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:14.288537 kubelet[2201]: I0317 18:31:14.288494 2201 kubelet.go:1419] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration 
dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Mar 17 18:31:14.288652 kubelet[2201]: I0317 18:31:14.288637 2201 kubelet.go:1423] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Mar 17 18:31:14.288750 kubelet[2201]: I0317 18:31:14.288739 2201 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 18:31:14.287000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:14.293486 kubelet[2201]: I0317 18:31:14.293463 2201 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 18:31:14.293625 kubelet[2201]: I0317 18:31:14.293612 2201 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 18:31:14.293773 kubelet[2201]: I0317 18:31:14.293760 2201 reconciler.go:26] "Reconciler: start to sync state" Mar 17 18:31:14.295219 kernel: audit: type=1327 audit(1742236274.270:225): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:14.295278 kernel: audit: type=1400 audit(1742236274.287:226): avc: denied { mac_admin } for pid=2201 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:14.295297 kernel: audit: type=1401 audit(1742236274.287:226): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:14.287000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:14.287000 audit[2201]: 
SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40007f8f40 a1=4000c60108 a2=4000b78ea0 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:14.297994 kubelet[2201]: I0317 18:31:14.297954 2201 factory.go:221] Registration of the containerd container factory successfully Mar 17 18:31:14.297994 kubelet[2201]: I0317 18:31:14.297978 2201 factory.go:221] Registration of the systemd container factory successfully Mar 17 18:31:14.298075 kubelet[2201]: I0317 18:31:14.298053 2201 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 18:31:14.301023 kernel: audit: type=1300 audit(1742236274.287:226): arch=c00000b7 syscall=5 success=no exit=-22 a0=40007f8f40 a1=4000c60108 a2=4000b78ea0 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:14.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:14.306089 kernel: audit: type=1327 audit(1742236274.287:226): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:14.308695 kubelet[2201]: I0317 18:31:14.308662 2201 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 17 18:31:14.309641 kubelet[2201]: I0317 18:31:14.309611 2201 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 17 18:31:14.309707 kubelet[2201]: I0317 18:31:14.309657 2201 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 18:31:14.309707 kubelet[2201]: I0317 18:31:14.309674 2201 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 18:31:14.309750 kubelet[2201]: E0317 18:31:14.309716 2201 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 17 18:31:14.344655 kubelet[2201]: I0317 18:31:14.344577 2201 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 18:31:14.344769 kubelet[2201]: I0317 18:31:14.344661 2201 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 18:31:14.344769 kubelet[2201]: I0317 18:31:14.344684 2201 state_mem.go:36] "Initialized new in-memory state store" Mar 17 18:31:14.344837 kubelet[2201]: I0317 18:31:14.344820 2201 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 17 18:31:14.344871 kubelet[2201]: I0317 18:31:14.344837 2201 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 17 18:31:14.344871 kubelet[2201]: I0317 18:31:14.344856 2201 policy_none.go:49] "None policy: Start" Mar 17 18:31:14.345325 kubelet[2201]: I0317 18:31:14.345308 2201 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 18:31:14.345367 kubelet[2201]: I0317 18:31:14.345336 2201 state_mem.go:35] "Initializing new in-memory state store" Mar 17 18:31:14.345522 kubelet[2201]: I0317 18:31:14.345508 2201 state_mem.go:75] "Updated machine memory state" Mar 17 18:31:14.346666 kubelet[2201]: I0317 18:31:14.346642 2201 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 18:31:14.345000 audit[2201]: AVC avc: denied { mac_admin } for pid=2201 comm="kubelet" 
capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:14.345000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Mar 17 18:31:14.345000 audit[2201]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40006d37d0 a1=400092dd70 a2=40006d37a0 a3=25 items=0 ppid=1 pid=2201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:14.345000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Mar 17 18:31:14.346846 kubelet[2201]: I0317 18:31:14.346723 2201 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Mar 17 18:31:14.346947 kubelet[2201]: I0317 18:31:14.346902 2201 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 18:31:14.347017 kubelet[2201]: I0317 18:31:14.347001 2201 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 18:31:14.397088 kubelet[2201]: I0317 18:31:14.397059 2201 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 17 18:31:14.403938 kubelet[2201]: I0317 18:31:14.403770 2201 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Mar 17 18:31:14.403938 kubelet[2201]: I0317 18:31:14.403845 2201 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 17 18:31:14.427553 kubelet[2201]: I0317 18:31:14.427509 2201 topology_manager.go:215] "Topology Admit Handler" podUID="53e7813b23671b82968158398f98ebdd" podNamespace="kube-system" podName="kube-apiserver-localhost" Mar 17 18:31:14.427719 kubelet[2201]: I0317 18:31:14.427618 2201 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost" Mar 17 18:31:14.427719 kubelet[2201]: I0317 18:31:14.427659 2201 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost" Mar 17 18:31:14.495298 kubelet[2201]: I0317 18:31:14.495194 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:14.596165 kubelet[2201]: I0317 18:31:14.596113 2201 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:14.596165 kubelet[2201]: I0317 18:31:14.596159 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:14.596337 kubelet[2201]: I0317 18:31:14.596185 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:14.596337 kubelet[2201]: I0317 18:31:14.596203 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:14.596337 kubelet[2201]: I0317 18:31:14.596219 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost" Mar 17 18:31:14.596337 kubelet[2201]: I0317 18:31:14.596233 2201 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/53e7813b23671b82968158398f98ebdd-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"53e7813b23671b82968158398f98ebdd\") " pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:14.596337 kubelet[2201]: I0317 18:31:14.596268 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:14.596485 kubelet[2201]: I0317 18:31:14.596285 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost" Mar 17 18:31:14.733897 kubelet[2201]: E0317 18:31:14.733858 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:14.734051 kubelet[2201]: E0317 18:31:14.733999 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:14.734187 kubelet[2201]: E0317 18:31:14.734165 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:15.270568 kubelet[2201]: I0317 18:31:15.270515 2201 apiserver.go:52] "Watching apiserver" Mar 17 18:31:15.294465 kubelet[2201]: 
I0317 18:31:15.294430 2201 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 18:31:15.323507 kubelet[2201]: E0317 18:31:15.323396 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:15.324814 kubelet[2201]: E0317 18:31:15.324790 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:15.337322 kubelet[2201]: E0317 18:31:15.337273 2201 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Mar 17 18:31:15.338970 kubelet[2201]: E0317 18:31:15.338942 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:15.350198 kubelet[2201]: I0317 18:31:15.350146 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.350114352 podStartE2EDuration="1.350114352s" podCreationTimestamp="2025-03-17 18:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:31:15.34237054 +0000 UTC m=+1.117726730" watchObservedRunningTime="2025-03-17 18:31:15.350114352 +0000 UTC m=+1.125470582" Mar 17 18:31:15.359125 kubelet[2201]: I0317 18:31:15.359072 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.359053003 podStartE2EDuration="1.359053003s" podCreationTimestamp="2025-03-17 18:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:31:15.351165208 +0000 UTC m=+1.126521438" watchObservedRunningTime="2025-03-17 18:31:15.359053003 +0000 UTC m=+1.134409233" Mar 17 18:31:15.359352 kubelet[2201]: I0317 18:31:15.359322 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.3593142679999999 podStartE2EDuration="1.359314268s" podCreationTimestamp="2025-03-17 18:31:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:31:15.358730419 +0000 UTC m=+1.134086649" watchObservedRunningTime="2025-03-17 18:31:15.359314268 +0000 UTC m=+1.134670537" Mar 17 18:31:16.334794 kubelet[2201]: E0317 18:31:16.334746 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:19.111457 sudo[1482]: pam_unix(sudo:session): session closed for user root Mar 17 18:31:19.110000 audit[1482]: USER_END pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Mar 17 18:31:19.110000 audit[1482]: CRED_DISP pid=1482 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:19.113718 sshd[1476]: pam_unix(sshd:session): session closed for user core Mar 17 18:31:19.113000 audit[1476]: USER_END pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:19.113000 audit[1476]: CRED_DISP pid=1476 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:19.116569 systemd[1]: sshd@6-10.0.0.124:22-10.0.0.1:38534.service: Deactivated successfully. Mar 17 18:31:19.115000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.124:22-10.0.0.1:38534 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:19.117300 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 18:31:19.117642 systemd-logind[1300]: Session 7 logged out. Waiting for processes to exit. Mar 17 18:31:19.118265 systemd-logind[1300]: Removed session 7. 
Mar 17 18:31:20.763185 kubelet[2201]: E0317 18:31:20.763153 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:21.341635 kubelet[2201]: E0317 18:31:21.341586 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:24.246916 kubelet[2201]: E0317 18:31:24.246877 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:24.345876 kubelet[2201]: E0317 18:31:24.345828 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:24.457077 kubelet[2201]: E0317 18:31:24.457043 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:25.346802 kubelet[2201]: E0317 18:31:25.346753 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:29.035017 kubelet[2201]: I0317 18:31:29.034975 2201 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 17 18:31:29.035387 env[1316]: time="2025-03-17T18:31:29.035332021Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Mar 17 18:31:29.035598 kubelet[2201]: I0317 18:31:29.035522 2201 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 17 18:31:29.590896 kubelet[2201]: I0317 18:31:29.590842 2201 topology_manager.go:215] "Topology Admit Handler" podUID="6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31" podNamespace="kube-system" podName="kube-proxy-62ld6" Mar 17 18:31:29.593401 kubelet[2201]: W0317 18:31:29.593375 2201 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:31:29.593554 kubelet[2201]: E0317 18:31:29.593539 2201 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:31:29.594594 kubelet[2201]: W0317 18:31:29.594574 2201 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:31:29.594705 kubelet[2201]: E0317 18:31:29.594692 2201 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:localhost" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object Mar 17 18:31:29.609926 kubelet[2201]: I0317 18:31:29.609895 2201 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-xtables-lock\") pod \"kube-proxy-62ld6\" (UID: \"6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31\") " pod="kube-system/kube-proxy-62ld6" Mar 17 18:31:29.610052 kubelet[2201]: I0317 18:31:29.610033 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vmjnp\" (UniqueName: \"kubernetes.io/projected/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-kube-api-access-vmjnp\") pod \"kube-proxy-62ld6\" (UID: \"6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31\") " pod="kube-system/kube-proxy-62ld6" Mar 17 18:31:29.610132 kubelet[2201]: I0317 18:31:29.610119 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-kube-proxy\") pod \"kube-proxy-62ld6\" (UID: \"6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31\") " pod="kube-system/kube-proxy-62ld6" Mar 17 18:31:29.610223 kubelet[2201]: I0317 18:31:29.610209 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-lib-modules\") pod \"kube-proxy-62ld6\" (UID: \"6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31\") " pod="kube-system/kube-proxy-62ld6" Mar 17 18:31:30.003500 kubelet[2201]: I0317 18:31:30.003245 2201 topology_manager.go:215] "Topology Admit Handler" podUID="b6605daa-7918-48dd-9d8f-215a032913d2" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-nbs8g" Mar 17 18:31:30.113449 kubelet[2201]: I0317 18:31:30.113389 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lq5dn\" (UniqueName: \"kubernetes.io/projected/b6605daa-7918-48dd-9d8f-215a032913d2-kube-api-access-lq5dn\") pod \"tigera-operator-7bc55997bb-nbs8g\" 
(UID: \"b6605daa-7918-48dd-9d8f-215a032913d2\") " pod="tigera-operator/tigera-operator-7bc55997bb-nbs8g" Mar 17 18:31:30.113449 kubelet[2201]: I0317 18:31:30.113449 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6605daa-7918-48dd-9d8f-215a032913d2-var-lib-calico\") pod \"tigera-operator-7bc55997bb-nbs8g\" (UID: \"b6605daa-7918-48dd-9d8f-215a032913d2\") " pod="tigera-operator/tigera-operator-7bc55997bb-nbs8g" Mar 17 18:31:30.306781 env[1316]: time="2025-03-17T18:31:30.306672038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-nbs8g,Uid:b6605daa-7918-48dd-9d8f-215a032913d2,Namespace:tigera-operator,Attempt:0,}" Mar 17 18:31:30.321072 env[1316]: time="2025-03-17T18:31:30.321001095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:30.322119 env[1316]: time="2025-03-17T18:31:30.322061212Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:30.322119 env[1316]: time="2025-03-17T18:31:30.322080252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:30.322363 env[1316]: time="2025-03-17T18:31:30.322276059Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a4794796addbd80314535172b07caa18b3f729d5a1b568e6b69e19996427630e pid=2295 runtime=io.containerd.runc.v2 Mar 17 18:31:30.367311 env[1316]: time="2025-03-17T18:31:30.367268220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-nbs8g,Uid:b6605daa-7918-48dd-9d8f-215a032913d2,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a4794796addbd80314535172b07caa18b3f729d5a1b568e6b69e19996427630e\"" Mar 17 18:31:30.369855 env[1316]: time="2025-03-17T18:31:30.369809268Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Mar 17 18:31:30.718168 kubelet[2201]: E0317 18:31:30.718064 2201 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:31:30.718168 kubelet[2201]: E0317 18:31:30.718100 2201 projected.go:200] Error preparing data for projected volume kube-api-access-vmjnp for pod kube-system/kube-proxy-62ld6: failed to sync configmap cache: timed out waiting for the condition Mar 17 18:31:30.718168 kubelet[2201]: E0317 18:31:30.718168 2201 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-kube-api-access-vmjnp podName:6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31 nodeName:}" failed. No retries permitted until 2025-03-17 18:31:31.218148191 +0000 UTC m=+16.993504381 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-vmjnp" (UniqueName: "kubernetes.io/projected/6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31-kube-api-access-vmjnp") pod "kube-proxy-62ld6" (UID: "6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31") : failed to sync configmap cache: timed out waiting for the condition Mar 17 18:31:30.973830 update_engine[1301]: I0317 18:31:30.973459 1301 update_attempter.cc:509] Updating boot flags... Mar 17 18:31:31.395202 kubelet[2201]: E0317 18:31:31.395169 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:31.397192 env[1316]: time="2025-03-17T18:31:31.397142268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62ld6,Uid:6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:31.409491 env[1316]: time="2025-03-17T18:31:31.409423313Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:31.409491 env[1316]: time="2025-03-17T18:31:31.409461234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:31.409633 env[1316]: time="2025-03-17T18:31:31.409471675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:31.409884 env[1316]: time="2025-03-17T18:31:31.409849807Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/48d65b88c55e2effb5a3511922e68cd1e701624eb6df4998f42244f956b6442e pid=2352 runtime=io.containerd.runc.v2 Mar 17 18:31:31.447914 env[1316]: time="2025-03-17T18:31:31.447872781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-62ld6,Uid:6ed91f4a-15b4-4bd2-bf0a-76bca35e8c31,Namespace:kube-system,Attempt:0,} returns sandbox id \"48d65b88c55e2effb5a3511922e68cd1e701624eb6df4998f42244f956b6442e\"" Mar 17 18:31:31.448687 kubelet[2201]: E0317 18:31:31.448488 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:31.451587 env[1316]: time="2025-03-17T18:31:31.451550182Z" level=info msg="CreateContainer within sandbox \"48d65b88c55e2effb5a3511922e68cd1e701624eb6df4998f42244f956b6442e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 18:31:31.464821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1939270024.mount: Deactivated successfully. 
Mar 17 18:31:31.470825 env[1316]: time="2025-03-17T18:31:31.470781817Z" level=info msg="CreateContainer within sandbox \"48d65b88c55e2effb5a3511922e68cd1e701624eb6df4998f42244f956b6442e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"388e32d853cceb65e6a67d7bdbaa522bf5ef5cb40d053692e1df21f047d597d5\"" Mar 17 18:31:31.479462 env[1316]: time="2025-03-17T18:31:31.475130200Z" level=info msg="StartContainer for \"388e32d853cceb65e6a67d7bdbaa522bf5ef5cb40d053692e1df21f047d597d5\"" Mar 17 18:31:31.679567 env[1316]: time="2025-03-17T18:31:31.679466620Z" level=info msg="StartContainer for \"388e32d853cceb65e6a67d7bdbaa522bf5ef5cb40d053692e1df21f047d597d5\" returns successfully" Mar 17 18:31:31.681000 audit[2445]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.682465 kernel: kauditd_printk_skb: 9 callbacks suppressed Mar 17 18:31:31.682533 kernel: audit: type=1325 audit(1742236291.681:233): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.681000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffefd502a0 a2=0 a3=1 items=0 ppid=2403 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.688017 kernel: audit: type=1300 audit(1742236291.681:233): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffefd502a0 a2=0 a3=1 items=0 ppid=2403 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.688090 kernel: audit: type=1327 audit(1742236291.681:233): 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:31:31.681000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:31:31.681000 audit[2446]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.691900 kernel: audit: type=1325 audit(1742236291.681:234): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2446 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.691977 kernel: audit: type=1300 audit(1742236291.681:234): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff86bf0a0 a2=0 a3=1 items=0 ppid=2403 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.681000 audit[2446]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff86bf0a0 a2=0 a3=1 items=0 ppid=2403 pid=2446 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.681000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:31:31.697335 kernel: audit: type=1327 audit(1742236291.681:234): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Mar 17 18:31:31.682000 audit[2447]: NETFILTER_CFG table=nat:40 family=10 entries=1 op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.699216 kernel: audit: type=1325 audit(1742236291.682:235): table=nat:40 family=10 entries=1 
op=nft_register_chain pid=2447 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.682000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6b34f80 a2=0 a3=1 items=0 ppid=2403 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.702800 kernel: audit: type=1300 audit(1742236291.682:235): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6b34f80 a2=0 a3=1 items=0 ppid=2403 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.702882 kernel: audit: type=1327 audit(1742236291.682:235): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:31:31.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:31:31.682000 audit[2448]: NETFILTER_CFG table=filter:41 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.706505 kernel: audit: type=1325 audit(1742236291.682:236): table=filter:41 family=10 entries=1 op=nft_register_chain pid=2448 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.682000 audit[2448]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd093b0c0 a2=0 a3=1 items=0 ppid=2403 pid=2448 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.682000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:31:31.684000 audit[2449]: NETFILTER_CFG table=nat:42 family=2 entries=1 op=nft_register_chain pid=2449 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.684000 audit[2449]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca0de120 a2=0 a3=1 items=0 ppid=2403 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.684000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Mar 17 18:31:31.687000 audit[2450]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.687000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff05aded0 a2=0 a3=1 items=0 ppid=2403 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Mar 17 18:31:31.788000 audit[2451]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.788000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc4cfedf0 a2=0 a3=1 items=0 ppid=2403 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.788000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:31:31.792000 audit[2453]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.792000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc1034530 a2=0 a3=1 items=0 ppid=2403 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.792000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Mar 17 18:31:31.796000 audit[2456]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.796000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc1fea500 a2=0 a3=1 items=0 ppid=2403 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.796000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Mar 17 18:31:31.797000 audit[2457]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.797000 audit[2457]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=100 a0=3 a1=ffffe7215ff0 a2=0 a3=1 items=0 ppid=2403 pid=2457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.797000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:31:31.800000 audit[2459]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.800000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe4ee03d0 a2=0 a3=1 items=0 ppid=2403 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:31:31.801000 audit[2460]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2460 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.801000 audit[2460]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6638b20 a2=0 a3=1 items=0 ppid=2403 pid=2460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:31:31.803000 audit[2462]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2462 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.803000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff6c08740 a2=0 a3=1 items=0 ppid=2403 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.803000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:31:31.806000 audit[2465]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.806000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe144ae10 a2=0 a3=1 items=0 ppid=2403 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.806000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Mar 17 18:31:31.807000 audit[2466]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.807000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1673330 a2=0 a3=1 items=0 ppid=2403 pid=2466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.807000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:31:31.810000 audit[2468]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.810000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc533aa0 a2=0 a3=1 items=0 ppid=2403 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.810000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:31:31.811000 audit[2469]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.811000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9408990 a2=0 a3=1 items=0 ppid=2403 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.811000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:31:31.813000 audit[2471]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.813000 audit[2471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff644a770 a2=0 a3=1 items=0 ppid=2403 pid=2471 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.813000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:31:31.817000 audit[2474]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.817000 audit[2474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7bf0490 a2=0 a3=1 items=0 ppid=2403 pid=2474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.817000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:31:31.820000 audit[2477]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2477 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.820000 audit[2477]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6be2aa0 a2=0 a3=1 items=0 ppid=2403 pid=2477 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.820000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:31:31.821000 audit[2478]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.821000 audit[2478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc3560580 a2=0 a3=1 items=0 ppid=2403 pid=2478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.821000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:31:31.824000 audit[2480]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.824000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc4288980 a2=0 a3=1 items=0 ppid=2403 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.824000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:31:31.827000 audit[2483]: NETFILTER_CFG table=nat:60 family=2 entries=1 op=nft_register_rule pid=2483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.827000 audit[2483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcd6ed440 a2=0 a3=1 items=0 ppid=2403 pid=2483 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.827000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:31:31.828000 audit[2484]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.828000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5d7ad00 a2=0 a3=1 items=0 ppid=2403 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.828000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:31:31.830000 audit[2486]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Mar 17 18:31:31.830000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff3f3c910 a2=0 a3=1 items=0 ppid=2403 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.830000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:31:31.848000 audit[2492]: NETFILTER_CFG table=filter:63 family=2 entries=8 
op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:31.848000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=fffffae042e0 a2=0 a3=1 items=0 ppid=2403 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.848000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:31.858000 audit[2492]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:31.858000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=fffffae042e0 a2=0 a3=1 items=0 ppid=2403 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.858000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:31.859000 audit[2496]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.859000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffceba5130 a2=0 a3=1 items=0 ppid=2403 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.859000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Mar 17 18:31:31.863000 audit[2498]: 
NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.863000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffff7e8610 a2=0 a3=1 items=0 ppid=2403 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.863000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Mar 17 18:31:31.866000 audit[2501]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.866000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc5b19190 a2=0 a3=1 items=0 ppid=2403 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.866000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Mar 17 18:31:31.867000 audit[2502]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2502 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.867000 audit[2502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd844a80 a2=0 a3=1 items=0 ppid=2403 pid=2502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.867000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Mar 17 18:31:31.869000 audit[2504]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.869000 audit[2504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffeb6e8f70 a2=0 a3=1 items=0 ppid=2403 pid=2504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.869000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Mar 17 18:31:31.870000 audit[2505]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.870000 audit[2505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff423f980 a2=0 a3=1 items=0 ppid=2403 pid=2505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.870000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Mar 17 18:31:31.872000 audit[2507]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.872000 audit[2507]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffe7b2e430 a2=0 a3=1 items=0 ppid=2403 pid=2507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.872000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Mar 17 18:31:31.876000 audit[2510]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.876000 audit[2510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe0a2c190 a2=0 a3=1 items=0 ppid=2403 pid=2510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.876000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Mar 17 18:31:31.877000 audit[2511]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.877000 audit[2511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe938ed30 a2=0 a3=1 items=0 ppid=2403 pid=2511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.877000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Mar 17 18:31:31.879000 audit[2513]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.879000 audit[2513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff55fdef0 a2=0 a3=1 items=0 ppid=2403 pid=2513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.879000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Mar 17 18:31:31.880000 audit[2514]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.880000 audit[2514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc4ab10f0 a2=0 a3=1 items=0 ppid=2403 pid=2514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.880000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Mar 17 18:31:31.882000 audit[2516]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.882000 audit[2516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffee072c10 a2=0 a3=1 items=0 ppid=2403 pid=2516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.882000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Mar 17 18:31:31.886000 audit[2519]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2519 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.886000 audit[2519]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff2c1f8c0 a2=0 a3=1 items=0 ppid=2403 pid=2519 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.886000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Mar 17 18:31:31.889000 audit[2522]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.889000 audit[2522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff3451210 a2=0 a3=1 items=0 ppid=2403 pid=2522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.889000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Mar 17 18:31:31.890000 audit[2523]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.890000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffecc264e0 a2=0 a3=1 items=0 ppid=2403 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.890000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Mar 17 18:31:31.892000 audit[2525]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.892000 audit[2525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffe0e0ad10 a2=0 a3=1 items=0 ppid=2403 pid=2525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.892000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:31:31.895000 audit[2528]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2528 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.895000 audit[2528]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffdf922c90 a2=0 a3=1 items=0 ppid=2403 
pid=2528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.895000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Mar 17 18:31:31.897000 audit[2529]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.897000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0a2e5c0 a2=0 a3=1 items=0 ppid=2403 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.897000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Mar 17 18:31:31.899000 audit[2531]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2531 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.899000 audit[2531]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff0ee6e00 a2=0 a3=1 items=0 ppid=2403 pid=2531 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.899000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Mar 17 18:31:31.900000 audit[2532]: NETFILTER_CFG table=filter:84 
family=10 entries=1 op=nft_register_chain pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.900000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed591c00 a2=0 a3=1 items=0 ppid=2403 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.900000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Mar 17 18:31:31.902000 audit[2534]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2534 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.902000 audit[2534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffde1a8390 a2=0 a3=1 items=0 ppid=2403 pid=2534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.902000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:31:31.905000 audit[2537]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2537 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Mar 17 18:31:31.905000 audit[2537]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe9220250 a2=0 a3=1 items=0 ppid=2403 pid=2537 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.905000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Mar 17 18:31:31.908000 
audit[2539]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:31:31.908000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffe8ee55e0 a2=0 a3=1 items=0 ppid=2403 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.908000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:31.909000 audit[2539]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Mar 17 18:31:31.909000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe8ee55e0 a2=0 a3=1 items=0 ppid=2403 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:31.909000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:32.217149 env[1316]: time="2025-03-17T18:31:32.217105606Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:32.218299 env[1316]: time="2025-03-17T18:31:32.218264123Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:32.219753 env[1316]: time="2025-03-17T18:31:32.219722328Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:32.221011 env[1316]: time="2025-03-17T18:31:32.220972008Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:32.221587 env[1316]: time="2025-03-17T18:31:32.221547946Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Mar 17 18:31:32.223731 env[1316]: time="2025-03-17T18:31:32.223694893Z" level=info msg="CreateContainer within sandbox \"a4794796addbd80314535172b07caa18b3f729d5a1b568e6b69e19996427630e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 17 18:31:32.233086 env[1316]: time="2025-03-17T18:31:32.233036866Z" level=info msg="CreateContainer within sandbox \"a4794796addbd80314535172b07caa18b3f729d5a1b568e6b69e19996427630e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"9d41291f75d70db19547eb2cfb150f3c14badde84bb01b30451c71bd19798c39\"" Mar 17 18:31:32.233873 env[1316]: time="2025-03-17T18:31:32.233833211Z" level=info msg="StartContainer for \"9d41291f75d70db19547eb2cfb150f3c14badde84bb01b30451c71bd19798c39\"" Mar 17 18:31:32.311487 env[1316]: time="2025-03-17T18:31:32.311443567Z" level=info msg="StartContainer for \"9d41291f75d70db19547eb2cfb150f3c14badde84bb01b30451c71bd19798c39\" returns successfully" Mar 17 18:31:32.361377 kubelet[2201]: E0317 18:31:32.361107 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:32.378348 kubelet[2201]: I0317 18:31:32.378040 2201 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="tigera-operator/tigera-operator-7bc55997bb-nbs8g" podStartSLOduration=1.5239799010000001 podStartE2EDuration="3.378014496s" podCreationTimestamp="2025-03-17 18:31:29 +0000 UTC" firstStartedPulling="2025-03-17 18:31:30.368353017 +0000 UTC m=+16.143709247" lastFinishedPulling="2025-03-17 18:31:32.222387612 +0000 UTC m=+17.997743842" observedRunningTime="2025-03-17 18:31:32.369325264 +0000 UTC m=+18.144681494" watchObservedRunningTime="2025-03-17 18:31:32.378014496 +0000 UTC m=+18.153370726" Mar 17 18:31:32.378348 kubelet[2201]: I0317 18:31:32.378241 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-62ld6" podStartSLOduration=3.378236183 podStartE2EDuration="3.378236183s" podCreationTimestamp="2025-03-17 18:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:31:32.377794409 +0000 UTC m=+18.153150639" watchObservedRunningTime="2025-03-17 18:31:32.378236183 +0000 UTC m=+18.153592373" Mar 17 18:31:36.123000 audit[2579]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:36.123000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe22658a0 a2=0 a3=1 items=0 ppid=2403 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:36.123000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:36.128000 audit[2579]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:36.128000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=2700 a0=3 a1=ffffe22658a0 a2=0 a3=1 items=0 ppid=2403 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:36.128000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:36.140000 audit[2581]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:36.140000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc2ce99b0 a2=0 a3=1 items=0 ppid=2403 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:36.140000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:36.145000 audit[2581]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2581 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:36.145000 audit[2581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc2ce99b0 a2=0 a3=1 items=0 ppid=2403 pid=2581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:36.145000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:36.215702 kubelet[2201]: I0317 18:31:36.215662 2201 topology_manager.go:215] "Topology Admit Handler" podUID="76593602-73b3-40e0-86a3-08209e6c0792" podNamespace="calico-system" 
podName="calico-typha-5fbbc95945-2vswd" Mar 17 18:31:36.253368 kubelet[2201]: I0317 18:31:36.253176 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/76593602-73b3-40e0-86a3-08209e6c0792-typha-certs\") pod \"calico-typha-5fbbc95945-2vswd\" (UID: \"76593602-73b3-40e0-86a3-08209e6c0792\") " pod="calico-system/calico-typha-5fbbc95945-2vswd" Mar 17 18:31:36.253585 kubelet[2201]: I0317 18:31:36.253498 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfl7p\" (UniqueName: \"kubernetes.io/projected/76593602-73b3-40e0-86a3-08209e6c0792-kube-api-access-tfl7p\") pod \"calico-typha-5fbbc95945-2vswd\" (UID: \"76593602-73b3-40e0-86a3-08209e6c0792\") " pod="calico-system/calico-typha-5fbbc95945-2vswd" Mar 17 18:31:36.253645 kubelet[2201]: I0317 18:31:36.253600 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76593602-73b3-40e0-86a3-08209e6c0792-tigera-ca-bundle\") pod \"calico-typha-5fbbc95945-2vswd\" (UID: \"76593602-73b3-40e0-86a3-08209e6c0792\") " pod="calico-system/calico-typha-5fbbc95945-2vswd" Mar 17 18:31:36.261911 kubelet[2201]: I0317 18:31:36.261863 2201 topology_manager.go:215] "Topology Admit Handler" podUID="d490a816-e242-4d9a-b24d-9f7ce4b516bf" podNamespace="calico-system" podName="calico-node-szs4r" Mar 17 18:31:36.354749 kubelet[2201]: I0317 18:31:36.354711 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-cni-log-dir\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354749 kubelet[2201]: I0317 18:31:36.354750 2201 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-cni-net-dir\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354918 kubelet[2201]: I0317 18:31:36.354789 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-policysync\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354918 kubelet[2201]: I0317 18:31:36.354803 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-var-lib-calico\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354918 kubelet[2201]: I0317 18:31:36.354835 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-flexvol-driver-host\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354918 kubelet[2201]: I0317 18:31:36.354855 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxztp\" (UniqueName: \"kubernetes.io/projected/d490a816-e242-4d9a-b24d-9f7ce4b516bf-kube-api-access-zxztp\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.354918 kubelet[2201]: I0317 18:31:36.354873 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d490a816-e242-4d9a-b24d-9f7ce4b516bf-node-certs\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.355039 kubelet[2201]: I0317 18:31:36.354900 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-lib-modules\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.355039 kubelet[2201]: I0317 18:31:36.354916 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-xtables-lock\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.355039 kubelet[2201]: I0317 18:31:36.354934 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-cni-bin-dir\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.355039 kubelet[2201]: I0317 18:31:36.354958 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d490a816-e242-4d9a-b24d-9f7ce4b516bf-tigera-ca-bundle\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.355039 kubelet[2201]: I0317 18:31:36.354976 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/d490a816-e242-4d9a-b24d-9f7ce4b516bf-var-run-calico\") pod \"calico-node-szs4r\" (UID: \"d490a816-e242-4d9a-b24d-9f7ce4b516bf\") " pod="calico-system/calico-node-szs4r" Mar 17 18:31:36.372363 kubelet[2201]: I0317 18:31:36.372303 2201 topology_manager.go:215] "Topology Admit Handler" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" podNamespace="calico-system" podName="csi-node-driver-4f7cd" Mar 17 18:31:36.372664 kubelet[2201]: E0317 18:31:36.372630 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:36.455981 kubelet[2201]: I0317 18:31:36.455863 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/69ba96b0-551d-424c-b677-f69ea1cdb260-registration-dir\") pod \"csi-node-driver-4f7cd\" (UID: \"69ba96b0-551d-424c-b677-f69ea1cdb260\") " pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:36.455981 kubelet[2201]: I0317 18:31:36.455905 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrstn\" (UniqueName: \"kubernetes.io/projected/69ba96b0-551d-424c-b677-f69ea1cdb260-kube-api-access-qrstn\") pod \"csi-node-driver-4f7cd\" (UID: \"69ba96b0-551d-424c-b677-f69ea1cdb260\") " pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:36.455981 kubelet[2201]: I0317 18:31:36.455935 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/69ba96b0-551d-424c-b677-f69ea1cdb260-varrun\") pod \"csi-node-driver-4f7cd\" (UID: \"69ba96b0-551d-424c-b677-f69ea1cdb260\") " pod="calico-system/csi-node-driver-4f7cd" Mar 17 
18:31:36.456152 kubelet[2201]: I0317 18:31:36.456006 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/69ba96b0-551d-424c-b677-f69ea1cdb260-socket-dir\") pod \"csi-node-driver-4f7cd\" (UID: \"69ba96b0-551d-424c-b677-f69ea1cdb260\") " pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:36.456152 kubelet[2201]: I0317 18:31:36.456051 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/69ba96b0-551d-424c-b677-f69ea1cdb260-kubelet-dir\") pod \"csi-node-driver-4f7cd\" (UID: \"69ba96b0-551d-424c-b677-f69ea1cdb260\") " pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:36.458522 kubelet[2201]: E0317 18:31:36.458492 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.458642 kubelet[2201]: W0317 18:31:36.458624 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.458708 kubelet[2201]: E0317 18:31:36.458695 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.458941 kubelet[2201]: E0317 18:31:36.458928 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.459051 kubelet[2201]: W0317 18:31:36.459036 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.459110 kubelet[2201]: E0317 18:31:36.459098 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.463657 kubelet[2201]: E0317 18:31:36.463639 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.463749 kubelet[2201]: W0317 18:31:36.463736 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.463810 kubelet[2201]: E0317 18:31:36.463798 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.464046 kubelet[2201]: E0317 18:31:36.464031 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.464135 kubelet[2201]: W0317 18:31:36.464122 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.464196 kubelet[2201]: E0317 18:31:36.464184 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.472275 kubelet[2201]: E0317 18:31:36.472252 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.472275 kubelet[2201]: W0317 18:31:36.472272 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.472376 kubelet[2201]: E0317 18:31:36.472288 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.519588 kubelet[2201]: E0317 18:31:36.519559 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:36.520353 env[1316]: time="2025-03-17T18:31:36.519983029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbbc95945-2vswd,Uid:76593602-73b3-40e0-86a3-08209e6c0792,Namespace:calico-system,Attempt:0,}" Mar 17 18:31:36.534283 env[1316]: time="2025-03-17T18:31:36.534210038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:36.534283 env[1316]: time="2025-03-17T18:31:36.534258679Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:36.534283 env[1316]: time="2025-03-17T18:31:36.534269560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:36.534599 env[1316]: time="2025-03-17T18:31:36.534545407Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90bb71e09aaae66cf26bd9b5a19e68b450a9081d07d66ef74e5d61c750340d88 pid=2600 runtime=io.containerd.runc.v2 Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557350 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558260 kubelet[2201]: W0317 18:31:36.557371 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557390 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557579 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558260 kubelet[2201]: W0317 18:31:36.557588 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557602 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557761 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558260 kubelet[2201]: W0317 18:31:36.557772 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557786 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.558260 kubelet[2201]: E0317 18:31:36.557962 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558617 kubelet[2201]: W0317 18:31:36.557971 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.557984 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.558141 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558617 kubelet[2201]: W0317 18:31:36.558149 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.558162 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.558327 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558617 kubelet[2201]: W0317 18:31:36.558336 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.558347 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.558617 kubelet[2201]: E0317 18:31:36.558496 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558617 kubelet[2201]: W0317 18:31:36.558504 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558817 kubelet[2201]: E0317 18:31:36.558514 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.558817 kubelet[2201]: E0317 18:31:36.558645 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558817 kubelet[2201]: W0317 18:31:36.558652 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558817 kubelet[2201]: E0317 18:31:36.558690 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.558817 kubelet[2201]: E0317 18:31:36.558770 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558817 kubelet[2201]: W0317 18:31:36.558778 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558817 kubelet[2201]: E0317 18:31:36.558798 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.558980 kubelet[2201]: E0317 18:31:36.558892 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.558980 kubelet[2201]: W0317 18:31:36.558899 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.558980 kubelet[2201]: E0317 18:31:36.558974 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.559052 kubelet[2201]: E0317 18:31:36.559032 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.559052 kubelet[2201]: W0317 18:31:36.559038 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.559096 kubelet[2201]: E0317 18:31:36.559080 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.559183 kubelet[2201]: E0317 18:31:36.559163 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.559183 kubelet[2201]: W0317 18:31:36.559178 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.559241 kubelet[2201]: E0317 18:31:36.559196 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.559368 kubelet[2201]: E0317 18:31:36.559335 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.559368 kubelet[2201]: W0317 18:31:36.559346 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.559368 kubelet[2201]: E0317 18:31:36.559361 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.559508 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.562385 kubelet[2201]: W0317 18:31:36.559518 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.559532 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.559721 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.562385 kubelet[2201]: W0317 18:31:36.559731 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.559742 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.561346 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.562385 kubelet[2201]: W0317 18:31:36.561360 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.561376 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.562385 kubelet[2201]: E0317 18:31:36.561868 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.562664 kubelet[2201]: W0317 18:31:36.561886 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.562664 kubelet[2201]: E0317 18:31:36.561939 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.562664 kubelet[2201]: E0317 18:31:36.562094 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.562664 kubelet[2201]: W0317 18:31:36.562104 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.563619 kubelet[2201]: E0317 18:31:36.563547 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.563740 kubelet[2201]: E0317 18:31:36.563728 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.563791 kubelet[2201]: W0317 18:31:36.563740 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.563918 kubelet[2201]: E0317 18:31:36.563853 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.564023 kubelet[2201]: E0317 18:31:36.564010 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.564023 kubelet[2201]: W0317 18:31:36.564021 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.564173 kubelet[2201]: E0317 18:31:36.564111 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.564253 kubelet[2201]: E0317 18:31:36.564239 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.564290 kubelet[2201]: W0317 18:31:36.564253 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.564343 kubelet[2201]: E0317 18:31:36.564328 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.564488 kubelet[2201]: E0317 18:31:36.564476 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.564521 kubelet[2201]: W0317 18:31:36.564489 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.564521 kubelet[2201]: E0317 18:31:36.564500 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.564834 kubelet[2201]: E0317 18:31:36.564815 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:36.565226 kubelet[2201]: E0317 18:31:36.565202 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.565272 kubelet[2201]: W0317 18:31:36.565226 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.565272 kubelet[2201]: E0317 18:31:36.565238 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.565676 kubelet[2201]: E0317 18:31:36.565659 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.565676 kubelet[2201]: W0317 18:31:36.565677 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.565747 kubelet[2201]: E0317 18:31:36.565698 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:36.565927 kubelet[2201]: E0317 18:31:36.565913 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.565927 kubelet[2201]: W0317 18:31:36.565926 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.566014 kubelet[2201]: E0317 18:31:36.565936 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.568649 env[1316]: time="2025-03-17T18:31:36.568610410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szs4r,Uid:d490a816-e242-4d9a-b24d-9f7ce4b516bf,Namespace:calico-system,Attempt:0,}" Mar 17 18:31:36.573803 kubelet[2201]: E0317 18:31:36.573781 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:36.573803 kubelet[2201]: W0317 18:31:36.573798 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:36.573920 kubelet[2201]: E0317 18:31:36.573812 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:36.591550 env[1316]: time="2025-03-17T18:31:36.591465243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:36.591682 env[1316]: time="2025-03-17T18:31:36.591522924Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:36.591682 env[1316]: time="2025-03-17T18:31:36.591533724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:36.591891 env[1316]: time="2025-03-17T18:31:36.591805292Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137 pid=2661 runtime=io.containerd.runc.v2 Mar 17 18:31:36.609576 env[1316]: time="2025-03-17T18:31:36.609532391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fbbc95945-2vswd,Uid:76593602-73b3-40e0-86a3-08209e6c0792,Namespace:calico-system,Attempt:0,} returns sandbox id \"90bb71e09aaae66cf26bd9b5a19e68b450a9081d07d66ef74e5d61c750340d88\"" Mar 17 18:31:36.610623 kubelet[2201]: E0317 18:31:36.610309 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:36.612478 env[1316]: time="2025-03-17T18:31:36.611937293Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Mar 17 18:31:36.660103 env[1316]: time="2025-03-17T18:31:36.660063821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-szs4r,Uid:d490a816-e242-4d9a-b24d-9f7ce4b516bf,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\"" Mar 17 18:31:36.661320 kubelet[2201]: E0317 18:31:36.660701 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:37.153000 audit[2702]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:37.155090 
kernel: kauditd_printk_skb: 155 callbacks suppressed Mar 17 18:31:37.155147 kernel: audit: type=1325 audit(1742236297.153:288): table=filter:93 family=2 entries=17 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:37.153000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffe6a661c0 a2=0 a3=1 items=0 ppid=2403 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:37.160636 kernel: audit: type=1300 audit(1742236297.153:288): arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=ffffe6a661c0 a2=0 a3=1 items=0 ppid=2403 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:37.160679 kernel: audit: type=1327 audit(1742236297.153:288): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:37.153000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:37.167000 audit[2702]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:37.167000 audit[2702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6a661c0 a2=0 a3=1 items=0 ppid=2403 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:37.173758 kernel: audit: type=1325 audit(1742236297.167:289): table=nat:94 family=2 entries=12 op=nft_register_rule pid=2702 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:31:37.173824 kernel: audit: type=1300 audit(1742236297.167:289): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe6a661c0 a2=0 a3=1 items=0 ppid=2403 pid=2702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:37.173842 kernel: audit: type=1327 audit(1742236297.167:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:37.167000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:31:38.311611 kubelet[2201]: E0317 18:31:38.310721 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:38.367494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3720881054.mount: Deactivated successfully. 
Mar 17 18:31:38.984388 env[1316]: time="2025-03-17T18:31:38.984333319Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:38.985480 env[1316]: time="2025-03-17T18:31:38.985445466Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:38.986854 env[1316]: time="2025-03-17T18:31:38.986829378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:38.988352 env[1316]: time="2025-03-17T18:31:38.988323214Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:38.988843 env[1316]: time="2025-03-17T18:31:38.988819866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Mar 17 18:31:38.994549 env[1316]: time="2025-03-17T18:31:38.994515520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Mar 17 18:31:39.018290 env[1316]: time="2025-03-17T18:31:39.018229264Z" level=info msg="CreateContainer within sandbox \"90bb71e09aaae66cf26bd9b5a19e68b450a9081d07d66ef74e5d61c750340d88\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 17 18:31:39.074933 env[1316]: time="2025-03-17T18:31:39.074893668Z" level=info msg="CreateContainer within sandbox \"90bb71e09aaae66cf26bd9b5a19e68b450a9081d07d66ef74e5d61c750340d88\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"2f04159f59a1b24149daf067911e805465fd860a181494af239714f7afba6dc6\"" Mar 17 18:31:39.075484 env[1316]: time="2025-03-17T18:31:39.075459201Z" level=info msg="StartContainer for \"2f04159f59a1b24149daf067911e805465fd860a181494af239714f7afba6dc6\"" Mar 17 18:31:39.136439 env[1316]: time="2025-03-17T18:31:39.134975390Z" level=info msg="StartContainer for \"2f04159f59a1b24149daf067911e805465fd860a181494af239714f7afba6dc6\" returns successfully" Mar 17 18:31:39.391307 kubelet[2201]: E0317 18:31:39.391264 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:39.408628 kubelet[2201]: I0317 18:31:39.408553 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fbbc95945-2vswd" podStartSLOduration=1.026122247 podStartE2EDuration="3.408537791s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:31:36.611568124 +0000 UTC m=+22.386924354" lastFinishedPulling="2025-03-17 18:31:38.993983668 +0000 UTC m=+24.769339898" observedRunningTime="2025-03-17 18:31:39.408243304 +0000 UTC m=+25.183599534" watchObservedRunningTime="2025-03-17 18:31:39.408537791 +0000 UTC m=+25.183894021" Mar 17 18:31:39.464604 kubelet[2201]: E0317 18:31:39.464567 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.464604 kubelet[2201]: W0317 18:31:39.464591 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.464604 kubelet[2201]: E0317 18:31:39.464612 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.464798 kubelet[2201]: E0317 18:31:39.464766 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.464798 kubelet[2201]: W0317 18:31:39.464775 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.464798 kubelet[2201]: E0317 18:31:39.464783 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.464949 kubelet[2201]: E0317 18:31:39.464922 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.464949 kubelet[2201]: W0317 18:31:39.464935 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.464949 kubelet[2201]: E0317 18:31:39.464943 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.465104 kubelet[2201]: E0317 18:31:39.465084 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465104 kubelet[2201]: W0317 18:31:39.465096 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465157 kubelet[2201]: E0317 18:31:39.465105 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.465259 kubelet[2201]: E0317 18:31:39.465241 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465259 kubelet[2201]: W0317 18:31:39.465253 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465319 kubelet[2201]: E0317 18:31:39.465261 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.465388 kubelet[2201]: E0317 18:31:39.465378 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465431 kubelet[2201]: W0317 18:31:39.465388 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465431 kubelet[2201]: E0317 18:31:39.465397 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.465560 kubelet[2201]: E0317 18:31:39.465541 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465560 kubelet[2201]: W0317 18:31:39.465553 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465614 kubelet[2201]: E0317 18:31:39.465561 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.465695 kubelet[2201]: E0317 18:31:39.465686 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465720 kubelet[2201]: W0317 18:31:39.465696 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465720 kubelet[2201]: E0317 18:31:39.465703 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.465869 kubelet[2201]: E0317 18:31:39.465838 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.465869 kubelet[2201]: W0317 18:31:39.465863 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.465922 kubelet[2201]: E0317 18:31:39.465871 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.466032 kubelet[2201]: E0317 18:31:39.466020 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466059 kubelet[2201]: W0317 18:31:39.466032 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466059 kubelet[2201]: E0317 18:31:39.466040 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.466233 kubelet[2201]: E0317 18:31:39.466222 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466264 kubelet[2201]: W0317 18:31:39.466233 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466264 kubelet[2201]: E0317 18:31:39.466242 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.466401 kubelet[2201]: E0317 18:31:39.466390 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466401 kubelet[2201]: W0317 18:31:39.466400 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466462 kubelet[2201]: E0317 18:31:39.466423 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.466568 kubelet[2201]: E0317 18:31:39.466558 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466592 kubelet[2201]: W0317 18:31:39.466568 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466592 kubelet[2201]: E0317 18:31:39.466577 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.466710 kubelet[2201]: E0317 18:31:39.466700 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466735 kubelet[2201]: W0317 18:31:39.466710 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466735 kubelet[2201]: E0317 18:31:39.466717 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.466845 kubelet[2201]: E0317 18:31:39.466833 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.466872 kubelet[2201]: W0317 18:31:39.466846 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.466872 kubelet[2201]: E0317 18:31:39.466856 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.479341 kubelet[2201]: E0317 18:31:39.479315 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.479341 kubelet[2201]: W0317 18:31:39.479334 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.479456 kubelet[2201]: E0317 18:31:39.479349 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.479597 kubelet[2201]: E0317 18:31:39.479570 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.479597 kubelet[2201]: W0317 18:31:39.479587 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.479658 kubelet[2201]: E0317 18:31:39.479603 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.480097 kubelet[2201]: E0317 18:31:39.480074 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480097 kubelet[2201]: W0317 18:31:39.480091 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.480161 kubelet[2201]: E0317 18:31:39.480109 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.480332 kubelet[2201]: E0317 18:31:39.480307 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480332 kubelet[2201]: W0317 18:31:39.480324 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.480398 kubelet[2201]: E0317 18:31:39.480340 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.480527 kubelet[2201]: E0317 18:31:39.480515 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480527 kubelet[2201]: W0317 18:31:39.480525 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.480591 kubelet[2201]: E0317 18:31:39.480538 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.480684 kubelet[2201]: E0317 18:31:39.480670 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480684 kubelet[2201]: W0317 18:31:39.480679 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.480744 kubelet[2201]: E0317 18:31:39.480688 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.480838 kubelet[2201]: E0317 18:31:39.480827 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480838 kubelet[2201]: W0317 18:31:39.480836 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.480896 kubelet[2201]: E0317 18:31:39.480878 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.480968 kubelet[2201]: E0317 18:31:39.480959 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.480968 kubelet[2201]: W0317 18:31:39.480968 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481066 kubelet[2201]: E0317 18:31:39.481050 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.481117 kubelet[2201]: E0317 18:31:39.481105 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.481117 kubelet[2201]: W0317 18:31:39.481115 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481173 kubelet[2201]: E0317 18:31:39.481127 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.481292 kubelet[2201]: E0317 18:31:39.481275 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.481292 kubelet[2201]: W0317 18:31:39.481285 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481363 kubelet[2201]: E0317 18:31:39.481296 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.481447 kubelet[2201]: E0317 18:31:39.481437 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.481447 kubelet[2201]: W0317 18:31:39.481446 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481515 kubelet[2201]: E0317 18:31:39.481457 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.481608 kubelet[2201]: E0317 18:31:39.481599 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.481638 kubelet[2201]: W0317 18:31:39.481608 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481638 kubelet[2201]: E0317 18:31:39.481620 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.481848 kubelet[2201]: E0317 18:31:39.481835 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.481848 kubelet[2201]: W0317 18:31:39.481848 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.481916 kubelet[2201]: E0317 18:31:39.481862 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.482028 kubelet[2201]: E0317 18:31:39.482017 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.482063 kubelet[2201]: W0317 18:31:39.482029 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.482063 kubelet[2201]: E0317 18:31:39.482043 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.482193 kubelet[2201]: E0317 18:31:39.482183 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.482193 kubelet[2201]: W0317 18:31:39.482193 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.482247 kubelet[2201]: E0317 18:31:39.482205 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.482365 kubelet[2201]: E0317 18:31:39.482349 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.482365 kubelet[2201]: W0317 18:31:39.482360 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.482436 kubelet[2201]: E0317 18:31:39.482374 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:39.482579 kubelet[2201]: E0317 18:31:39.482562 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.482612 kubelet[2201]: W0317 18:31:39.482580 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.482612 kubelet[2201]: E0317 18:31:39.482592 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:39.482851 kubelet[2201]: E0317 18:31:39.482838 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:39.482884 kubelet[2201]: W0317 18:31:39.482851 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:39.482884 kubelet[2201]: E0317 18:31:39.482862 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.310068 kubelet[2201]: E0317 18:31:40.310022 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:40.391435 kubelet[2201]: I0317 18:31:40.391292 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:31:40.391913 kubelet[2201]: E0317 18:31:40.391893 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:40.449836 env[1316]: time="2025-03-17T18:31:40.449779483Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:40.451192 env[1316]: time="2025-03-17T18:31:40.451154113Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:40.456143 env[1316]: time="2025-03-17T18:31:40.456106580Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:40.458880 env[1316]: time="2025-03-17T18:31:40.458627475Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:40.459497 env[1316]: time="2025-03-17T18:31:40.459467173Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Mar 17 18:31:40.463402 env[1316]: time="2025-03-17T18:31:40.463371938Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 18:31:40.474897 kubelet[2201]: E0317 18:31:40.474788 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.474897 kubelet[2201]: W0317 18:31:40.474807 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.474897 kubelet[2201]: E0317 18:31:40.474825 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.475230 kubelet[2201]: E0317 18:31:40.475114 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.475230 kubelet[2201]: W0317 18:31:40.475127 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.475230 kubelet[2201]: E0317 18:31:40.475137 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.476449 kubelet[2201]: E0317 18:31:40.475396 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.476449 kubelet[2201]: W0317 18:31:40.475428 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.476449 kubelet[2201]: E0317 18:31:40.475443 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.476768 kubelet[2201]: E0317 18:31:40.476638 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.476768 kubelet[2201]: W0317 18:31:40.476654 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.476768 kubelet[2201]: E0317 18:31:40.476676 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.477062 kubelet[2201]: E0317 18:31:40.476949 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.477062 kubelet[2201]: W0317 18:31:40.476961 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.477062 kubelet[2201]: E0317 18:31:40.476971 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.477371 kubelet[2201]: E0317 18:31:40.477230 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.477371 kubelet[2201]: W0317 18:31:40.477243 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.477371 kubelet[2201]: E0317 18:31:40.477253 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.477720 kubelet[2201]: E0317 18:31:40.477593 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.477720 kubelet[2201]: W0317 18:31:40.477606 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.477720 kubelet[2201]: E0317 18:31:40.477618 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.478031 kubelet[2201]: E0317 18:31:40.477893 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.478031 kubelet[2201]: W0317 18:31:40.477906 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.478031 kubelet[2201]: E0317 18:31:40.477916 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.478301 kubelet[2201]: E0317 18:31:40.478196 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.478301 kubelet[2201]: W0317 18:31:40.478209 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.478301 kubelet[2201]: E0317 18:31:40.478219 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.478671 kubelet[2201]: E0317 18:31:40.478471 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.478671 kubelet[2201]: W0317 18:31:40.478482 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.478671 kubelet[2201]: E0317 18:31:40.478492 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.479054 kubelet[2201]: E0317 18:31:40.478783 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.479054 kubelet[2201]: W0317 18:31:40.478802 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.479054 kubelet[2201]: E0317 18:31:40.478814 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.479294 kubelet[2201]: E0317 18:31:40.479174 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.479294 kubelet[2201]: W0317 18:31:40.479189 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.479294 kubelet[2201]: E0317 18:31:40.479199 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.479671 kubelet[2201]: E0317 18:31:40.479481 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.479671 kubelet[2201]: W0317 18:31:40.479493 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.479671 kubelet[2201]: E0317 18:31:40.479503 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.479951 kubelet[2201]: E0317 18:31:40.479829 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.479951 kubelet[2201]: W0317 18:31:40.479842 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.479951 kubelet[2201]: E0317 18:31:40.479852 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.480213 kubelet[2201]: E0317 18:31:40.480137 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.480213 kubelet[2201]: W0317 18:31:40.480149 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.480213 kubelet[2201]: E0317 18:31:40.480159 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.482192 env[1316]: time="2025-03-17T18:31:40.482133305Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269\"" Mar 17 18:31:40.484075 env[1316]: time="2025-03-17T18:31:40.482677437Z" level=info msg="StartContainer for \"59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269\"" Mar 17 18:31:40.486537 kubelet[2201]: E0317 18:31:40.486485 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.486537 kubelet[2201]: W0317 18:31:40.486502 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.486537 kubelet[2201]: E0317 18:31:40.486515 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.486726 kubelet[2201]: E0317 18:31:40.486703 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.486726 kubelet[2201]: W0317 18:31:40.486716 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.486726 kubelet[2201]: E0317 18:31:40.486725 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.486898 kubelet[2201]: E0317 18:31:40.486868 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.486898 kubelet[2201]: W0317 18:31:40.486882 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.486898 kubelet[2201]: E0317 18:31:40.486892 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.487169 kubelet[2201]: E0317 18:31:40.487150 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.487169 kubelet[2201]: W0317 18:31:40.487165 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.487245 kubelet[2201]: E0317 18:31:40.487179 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.487637 kubelet[2201]: E0317 18:31:40.487490 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.487637 kubelet[2201]: W0317 18:31:40.487535 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.487637 kubelet[2201]: E0317 18:31:40.487568 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.487845 kubelet[2201]: E0317 18:31:40.487823 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.487845 kubelet[2201]: W0317 18:31:40.487837 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.487924 kubelet[2201]: E0317 18:31:40.487851 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.488046 kubelet[2201]: E0317 18:31:40.488025 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.488046 kubelet[2201]: W0317 18:31:40.488036 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.488135 kubelet[2201]: E0317 18:31:40.488109 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.488465 kubelet[2201]: E0317 18:31:40.488448 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.488465 kubelet[2201]: W0317 18:31:40.488461 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.488560 kubelet[2201]: E0317 18:31:40.488545 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.488626 kubelet[2201]: E0317 18:31:40.488611 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.488626 kubelet[2201]: W0317 18:31:40.488621 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.488700 kubelet[2201]: E0317 18:31:40.488687 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.488778 kubelet[2201]: E0317 18:31:40.488766 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.488778 kubelet[2201]: W0317 18:31:40.488776 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.488848 kubelet[2201]: E0317 18:31:40.488786 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.488973 kubelet[2201]: E0317 18:31:40.488958 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.488973 kubelet[2201]: W0317 18:31:40.488970 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.489063 kubelet[2201]: E0317 18:31:40.488983 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.489347 kubelet[2201]: E0317 18:31:40.489334 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.489388 kubelet[2201]: W0317 18:31:40.489347 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.489388 kubelet[2201]: E0317 18:31:40.489361 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.489557 kubelet[2201]: E0317 18:31:40.489545 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.489557 kubelet[2201]: W0317 18:31:40.489557 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.489623 kubelet[2201]: E0317 18:31:40.489566 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.489905 kubelet[2201]: E0317 18:31:40.489892 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.489905 kubelet[2201]: W0317 18:31:40.489904 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.489994 kubelet[2201]: E0317 18:31:40.489976 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.490060 kubelet[2201]: E0317 18:31:40.490046 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.490060 kubelet[2201]: W0317 18:31:40.490055 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.490166 kubelet[2201]: E0317 18:31:40.490064 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.490213 kubelet[2201]: E0317 18:31:40.490196 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.490213 kubelet[2201]: W0317 18:31:40.490209 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.490280 kubelet[2201]: E0317 18:31:40.490218 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.494273 kubelet[2201]: E0317 18:31:40.494248 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.494273 kubelet[2201]: W0317 18:31:40.494266 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.494405 kubelet[2201]: E0317 18:31:40.494285 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 18:31:40.494684 kubelet[2201]: E0317 18:31:40.494669 2201 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 18:31:40.494684 kubelet[2201]: W0317 18:31:40.494681 2201 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 18:31:40.494762 kubelet[2201]: E0317 18:31:40.494691 2201 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 18:31:40.504226 systemd[1]: run-containerd-runc-k8s.io-59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269-runc.uNdyhP.mount: Deactivated successfully. Mar 17 18:31:40.558184 env[1316]: time="2025-03-17T18:31:40.558124555Z" level=info msg="StartContainer for \"59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269\" returns successfully" Mar 17 18:31:40.629706 env[1316]: time="2025-03-17T18:31:40.629660028Z" level=info msg="shim disconnected" id=59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269 Mar 17 18:31:40.629706 env[1316]: time="2025-03-17T18:31:40.629702549Z" level=warning msg="cleaning up after shim disconnected" id=59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269 namespace=k8s.io Mar 17 18:31:40.629706 env[1316]: time="2025-03-17T18:31:40.629711630Z" level=info msg="cleaning up dead shim" Mar 17 18:31:40.636097 env[1316]: time="2025-03-17T18:31:40.636055687Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2858 runtime=io.containerd.runc.v2\n" Mar 17 18:31:41.394753 kubelet[2201]: E0317 18:31:41.394725 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:41.395421 env[1316]: time="2025-03-17T18:31:41.395382342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Mar 17 18:31:41.477422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59b00e6e7ef93493ba3b2beb942226cd5342daef979600a42a4552f0ee9ad269-rootfs.mount: Deactivated successfully. Mar 17 18:31:42.310364 kubelet[2201]: E0317 18:31:42.310326 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:42.354310 systemd[1]: Started sshd@7-10.0.0.124:22-10.0.0.1:34972.service. Mar 17 18:31:42.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.124:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:42.358426 kernel: audit: type=1130 audit(1742236302.354:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.124:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:42.389000 audit[2880]: USER_ACCT pid=2880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.389943 sshd[2880]: Accepted publickey for core from 10.0.0.1 port 34972 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:31:42.391613 sshd[2880]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:31:42.390000 audit[2880]: CRED_ACQ pid=2880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.395880 kernel: audit: type=1101 audit(1742236302.389:291): pid=2880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.395960 kernel: audit: type=1103 audit(1742236302.390:292): pid=2880 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.397966 kernel: audit: type=1006 audit(1742236302.390:293): pid=2880 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=8 res=1 Mar 17 18:31:42.390000 audit[2880]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd586a030 a2=3 a3=1 items=0 ppid=1 pid=2880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:42.390000 audit: 
PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:42.402967 kernel: audit: type=1300 audit(1742236302.390:293): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd586a030 a2=3 a3=1 items=0 ppid=1 pid=2880 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:42.403013 kernel: audit: type=1327 audit(1742236302.390:293): proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:42.403963 systemd-logind[1300]: New session 8 of user core. Mar 17 18:31:42.404149 systemd[1]: Started session-8.scope. Mar 17 18:31:42.407000 audit[2880]: USER_START pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.409000 audit[2883]: CRED_ACQ pid=2883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.415694 kernel: audit: type=1105 audit(1742236302.407:294): pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.415751 kernel: audit: type=1103 audit(1742236302.409:295): pid=2883 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.521460 sshd[2880]: pam_unix(sshd:session): session closed for user core Mar 17 18:31:42.521000 
audit[2880]: USER_END pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.524036 systemd[1]: sshd@7-10.0.0.124:22-10.0.0.1:34972.service: Deactivated successfully. Mar 17 18:31:42.524784 systemd[1]: session-8.scope: Deactivated successfully. Mar 17 18:31:42.521000 audit[2880]: CRED_DISP pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.527035 systemd-logind[1300]: Session 8 logged out. Waiting for processes to exit. Mar 17 18:31:42.527799 systemd-logind[1300]: Removed session 8. Mar 17 18:31:42.528845 kernel: audit: type=1106 audit(1742236302.521:296): pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.528894 kernel: audit: type=1104 audit(1742236302.521:297): pid=2880 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:42.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.124:22-10.0.0.1:34972 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:44.310297 kubelet[2201]: E0317 18:31:44.310246 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:45.029220 env[1316]: time="2025-03-17T18:31:45.029172797Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:45.031750 env[1316]: time="2025-03-17T18:31:45.031715163Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:45.033581 env[1316]: time="2025-03-17T18:31:45.033554955Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:45.035157 env[1316]: time="2025-03-17T18:31:45.035132503Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:45.035677 env[1316]: time="2025-03-17T18:31:45.035649712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Mar 17 18:31:45.038118 env[1316]: time="2025-03-17T18:31:45.038087876Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 18:31:45.048100 env[1316]: 
time="2025-03-17T18:31:45.048054333Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405\"" Mar 17 18:31:45.048963 env[1316]: time="2025-03-17T18:31:45.048613303Z" level=info msg="StartContainer for \"1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405\"" Mar 17 18:31:45.136585 env[1316]: time="2025-03-17T18:31:45.136543345Z" level=info msg="StartContainer for \"1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405\" returns successfully" Mar 17 18:31:45.403220 kubelet[2201]: E0317 18:31:45.403183 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:45.729226 env[1316]: time="2025-03-17T18:31:45.729100071Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 18:31:45.745703 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405-rootfs.mount: Deactivated successfully. 
Mar 17 18:31:45.804073 env[1316]: time="2025-03-17T18:31:45.804014082Z" level=info msg="shim disconnected" id=1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405 Mar 17 18:31:45.804073 env[1316]: time="2025-03-17T18:31:45.804073443Z" level=warning msg="cleaning up after shim disconnected" id=1cbeb517db2a4e14451ed1514a4e31a1f00fe52aaa09c043b544ab749bda2405 namespace=k8s.io Mar 17 18:31:45.804285 env[1316]: time="2025-03-17T18:31:45.804085404Z" level=info msg="cleaning up dead shim" Mar 17 18:31:45.811848 env[1316]: time="2025-03-17T18:31:45.811794340Z" level=warning msg="cleanup warnings time=\"2025-03-17T18:31:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2948 runtime=io.containerd.runc.v2\n" Mar 17 18:31:45.812528 kubelet[2201]: I0317 18:31:45.812476 2201 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 18:31:45.831636 kubelet[2201]: I0317 18:31:45.830533 2201 topology_manager.go:215] "Topology Admit Handler" podUID="5da61114-c0cd-4682-83c2-1d119dc4cf0e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-blwg5" Mar 17 18:31:45.839512 kubelet[2201]: I0317 18:31:45.839481 2201 topology_manager.go:215] "Topology Admit Handler" podUID="4d71cd08-81fa-44dc-85ff-58ab8ef8fce9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rgm7b" Mar 17 18:31:45.839843 kubelet[2201]: I0317 18:31:45.839819 2201 topology_manager.go:215] "Topology Admit Handler" podUID="dae16c91-a8cb-494a-86d1-5ba60d550a02" podNamespace="calico-system" podName="calico-kube-controllers-566884d556-6vd4w" Mar 17 18:31:45.840222 kubelet[2201]: I0317 18:31:45.840192 2201 topology_manager.go:215] "Topology Admit Handler" podUID="30b62e3f-2593-4372-ab8d-126ab81bae75" podNamespace="calico-apiserver" podName="calico-apiserver-f7b5bdcb8-nw9hv" Mar 17 18:31:45.840552 kubelet[2201]: I0317 18:31:45.840528 2201 topology_manager.go:215] "Topology Admit Handler" podUID="a4b94281-bded-4df7-a0d8-c157b33b0138" podNamespace="calico-apiserver" 
podName="calico-apiserver-f7b5bdcb8-jwwvk" Mar 17 18:31:45.928554 kubelet[2201]: I0317 18:31:45.928513 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dae16c91-a8cb-494a-86d1-5ba60d550a02-tigera-ca-bundle\") pod \"calico-kube-controllers-566884d556-6vd4w\" (UID: \"dae16c91-a8cb-494a-86d1-5ba60d550a02\") " pod="calico-system/calico-kube-controllers-566884d556-6vd4w" Mar 17 18:31:45.928770 kubelet[2201]: I0317 18:31:45.928748 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w4w2\" (UniqueName: \"kubernetes.io/projected/dae16c91-a8cb-494a-86d1-5ba60d550a02-kube-api-access-6w4w2\") pod \"calico-kube-controllers-566884d556-6vd4w\" (UID: \"dae16c91-a8cb-494a-86d1-5ba60d550a02\") " pod="calico-system/calico-kube-controllers-566884d556-6vd4w" Mar 17 18:31:45.928878 kubelet[2201]: I0317 18:31:45.928863 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4d71cd08-81fa-44dc-85ff-58ab8ef8fce9-config-volume\") pod \"coredns-7db6d8ff4d-rgm7b\" (UID: \"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9\") " pod="kube-system/coredns-7db6d8ff4d-rgm7b" Mar 17 18:31:45.928981 kubelet[2201]: I0317 18:31:45.928967 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lqg9\" (UniqueName: \"kubernetes.io/projected/4d71cd08-81fa-44dc-85ff-58ab8ef8fce9-kube-api-access-5lqg9\") pod \"coredns-7db6d8ff4d-rgm7b\" (UID: \"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9\") " pod="kube-system/coredns-7db6d8ff4d-rgm7b" Mar 17 18:31:45.929084 kubelet[2201]: I0317 18:31:45.929068 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cw8lj\" (UniqueName: 
\"kubernetes.io/projected/30b62e3f-2593-4372-ab8d-126ab81bae75-kube-api-access-cw8lj\") pod \"calico-apiserver-f7b5bdcb8-nw9hv\" (UID: \"30b62e3f-2593-4372-ab8d-126ab81bae75\") " pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" Mar 17 18:31:45.929181 kubelet[2201]: I0317 18:31:45.929167 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chssq\" (UniqueName: \"kubernetes.io/projected/5da61114-c0cd-4682-83c2-1d119dc4cf0e-kube-api-access-chssq\") pod \"coredns-7db6d8ff4d-blwg5\" (UID: \"5da61114-c0cd-4682-83c2-1d119dc4cf0e\") " pod="kube-system/coredns-7db6d8ff4d-blwg5" Mar 17 18:31:45.929277 kubelet[2201]: I0317 18:31:45.929263 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fzgzd\" (UniqueName: \"kubernetes.io/projected/a4b94281-bded-4df7-a0d8-c157b33b0138-kube-api-access-fzgzd\") pod \"calico-apiserver-f7b5bdcb8-jwwvk\" (UID: \"a4b94281-bded-4df7-a0d8-c157b33b0138\") " pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" Mar 17 18:31:45.929377 kubelet[2201]: I0317 18:31:45.929363 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/30b62e3f-2593-4372-ab8d-126ab81bae75-calico-apiserver-certs\") pod \"calico-apiserver-f7b5bdcb8-nw9hv\" (UID: \"30b62e3f-2593-4372-ab8d-126ab81bae75\") " pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" Mar 17 18:31:45.929494 kubelet[2201]: I0317 18:31:45.929478 2201 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5da61114-c0cd-4682-83c2-1d119dc4cf0e-config-volume\") pod \"coredns-7db6d8ff4d-blwg5\" (UID: \"5da61114-c0cd-4682-83c2-1d119dc4cf0e\") " pod="kube-system/coredns-7db6d8ff4d-blwg5" Mar 17 18:31:45.929592 kubelet[2201]: I0317 18:31:45.929577 2201 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a4b94281-bded-4df7-a0d8-c157b33b0138-calico-apiserver-certs\") pod \"calico-apiserver-f7b5bdcb8-jwwvk\" (UID: \"a4b94281-bded-4df7-a0d8-c157b33b0138\") " pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" Mar 17 18:31:46.134586 kubelet[2201]: E0317 18:31:46.134543 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:46.135592 env[1316]: time="2025-03-17T18:31:46.135542645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blwg5,Uid:5da61114-c0cd-4682-83c2-1d119dc4cf0e,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:46.143235 kubelet[2201]: E0317 18:31:46.143206 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:46.146246 env[1316]: time="2025-03-17T18:31:46.145713419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgm7b,Uid:4d71cd08-81fa-44dc-85ff-58ab8ef8fce9,Namespace:kube-system,Attempt:0,}" Mar 17 18:31:46.149803 env[1316]: time="2025-03-17T18:31:46.149769008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-jwwvk,Uid:a4b94281-bded-4df7-a0d8-c157b33b0138,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:31:46.153540 env[1316]: time="2025-03-17T18:31:46.153505752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-nw9hv,Uid:30b62e3f-2593-4372-ab8d-126ab81bae75,Namespace:calico-apiserver,Attempt:0,}" Mar 17 18:31:46.153856 env[1316]: time="2025-03-17T18:31:46.153821878Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-566884d556-6vd4w,Uid:dae16c91-a8cb-494a-86d1-5ba60d550a02,Namespace:calico-system,Attempt:0,}" Mar 17 18:31:46.319661 env[1316]: time="2025-03-17T18:31:46.319617156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f7cd,Uid:69ba96b0-551d-424c-b677-f69ea1cdb260,Namespace:calico-system,Attempt:0,}" Mar 17 18:31:46.360856 env[1316]: time="2025-03-17T18:31:46.360774140Z" level=error msg="Failed to destroy network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.361202 env[1316]: time="2025-03-17T18:31:46.361166267Z" level=error msg="encountered an error cleaning up failed sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.361258 env[1316]: time="2025-03-17T18:31:46.361216628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blwg5,Uid:5da61114-c0cd-4682-83c2-1d119dc4cf0e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.361883 kubelet[2201]: E0317 18:31:46.361519 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.361883 kubelet[2201]: E0317 18:31:46.361589 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-blwg5" Mar 17 18:31:46.361883 kubelet[2201]: E0317 18:31:46.361608 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-blwg5" Mar 17 18:31:46.362081 kubelet[2201]: E0317 18:31:46.361649 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-blwg5_kube-system(5da61114-c0cd-4682-83c2-1d119dc4cf0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-blwg5_kube-system(5da61114-c0cd-4682-83c2-1d119dc4cf0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-blwg5" 
podUID="5da61114-c0cd-4682-83c2-1d119dc4cf0e" Mar 17 18:31:46.362253 env[1316]: time="2025-03-17T18:31:46.362213485Z" level=error msg="Failed to destroy network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.362695 env[1316]: time="2025-03-17T18:31:46.362661252Z" level=error msg="encountered an error cleaning up failed sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.362860 env[1316]: time="2025-03-17T18:31:46.362818735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgm7b,Uid:4d71cd08-81fa-44dc-85ff-58ab8ef8fce9,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.364869 kubelet[2201]: E0317 18:31:46.364586 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.364869 kubelet[2201]: E0317 18:31:46.364637 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rgm7b" Mar 17 18:31:46.364869 kubelet[2201]: E0317 18:31:46.364654 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-rgm7b" Mar 17 18:31:46.365009 kubelet[2201]: E0317 18:31:46.364686 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rgm7b_kube-system(4d71cd08-81fa-44dc-85ff-58ab8ef8fce9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rgm7b_kube-system(4d71cd08-81fa-44dc-85ff-58ab8ef8fce9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rgm7b" podUID="4d71cd08-81fa-44dc-85ff-58ab8ef8fce9" Mar 17 18:31:46.370431 env[1316]: time="2025-03-17T18:31:46.370369344Z" level=error msg="Failed to destroy network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 18:31:46.370749 env[1316]: time="2025-03-17T18:31:46.370709190Z" level=error msg="encountered an error cleaning up failed sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.370800 env[1316]: time="2025-03-17T18:31:46.370757511Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566884d556-6vd4w,Uid:dae16c91-a8cb-494a-86d1-5ba60d550a02,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371257 kubelet[2201]: E0317 18:31:46.370939 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371257 kubelet[2201]: E0317 18:31:46.370980 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566884d556-6vd4w" Mar 17 
18:31:46.371257 kubelet[2201]: E0317 18:31:46.370999 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-566884d556-6vd4w" Mar 17 18:31:46.371379 env[1316]: time="2025-03-17T18:31:46.370922594Z" level=error msg="Failed to destroy network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371379 env[1316]: time="2025-03-17T18:31:46.371274160Z" level=error msg="encountered an error cleaning up failed sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371379 env[1316]: time="2025-03-17T18:31:46.371312001Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-nw9hv,Uid:30b62e3f-2593-4372-ab8d-126ab81bae75,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371506 kubelet[2201]: E0317 18:31:46.371026 2201 pod_workers.go:1298] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-566884d556-6vd4w_calico-system(dae16c91-a8cb-494a-86d1-5ba60d550a02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-566884d556-6vd4w_calico-system(dae16c91-a8cb-494a-86d1-5ba60d550a02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-566884d556-6vd4w" podUID="dae16c91-a8cb-494a-86d1-5ba60d550a02" Mar 17 18:31:46.371760 kubelet[2201]: E0317 18:31:46.371623 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.371760 kubelet[2201]: E0317 18:31:46.371656 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" Mar 17 18:31:46.371760 kubelet[2201]: E0317 18:31:46.371698 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" Mar 17 18:31:46.371863 kubelet[2201]: E0317 18:31:46.371725 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f7b5bdcb8-nw9hv_calico-apiserver(30b62e3f-2593-4372-ab8d-126ab81bae75)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f7b5bdcb8-nw9hv_calico-apiserver(30b62e3f-2593-4372-ab8d-126ab81bae75)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" podUID="30b62e3f-2593-4372-ab8d-126ab81bae75" Mar 17 18:31:46.378707 env[1316]: time="2025-03-17T18:31:46.378653886Z" level=error msg="Failed to destroy network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.379163 env[1316]: time="2025-03-17T18:31:46.379126174Z" level=error msg="encountered an error cleaning up failed sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.379278 env[1316]: time="2025-03-17T18:31:46.379249416Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-jwwvk,Uid:a4b94281-bded-4df7-a0d8-c157b33b0138,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.379829 kubelet[2201]: E0317 18:31:46.379549 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.379829 kubelet[2201]: E0317 18:31:46.379590 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" Mar 17 18:31:46.379829 kubelet[2201]: E0317 18:31:46.379606 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" Mar 17 18:31:46.379996 kubelet[2201]: E0317 18:31:46.379645 2201 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f7b5bdcb8-jwwvk_calico-apiserver(a4b94281-bded-4df7-a0d8-c157b33b0138)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f7b5bdcb8-jwwvk_calico-apiserver(a4b94281-bded-4df7-a0d8-c157b33b0138)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" podUID="a4b94281-bded-4df7-a0d8-c157b33b0138" Mar 17 18:31:46.403988 env[1316]: time="2025-03-17T18:31:46.402125928Z" level=error msg="Failed to destroy network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.405173 env[1316]: time="2025-03-17T18:31:46.405137260Z" level=error msg="encountered an error cleaning up failed sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.405478 env[1316]: time="2025-03-17T18:31:46.405444385Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f7cd,Uid:69ba96b0-551d-424c-b677-f69ea1cdb260,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.405880 kubelet[2201]: I0317 18:31:46.405697 2201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:31:46.407277 kubelet[2201]: E0317 18:31:46.406642 2201 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.407277 kubelet[2201]: E0317 18:31:46.406690 2201 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:46.407277 kubelet[2201]: E0317 18:31:46.406710 2201 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4f7cd" Mar 17 18:31:46.407438 kubelet[2201]: E0317 18:31:46.406747 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4f7cd_calico-system(69ba96b0-551d-424c-b677-f69ea1cdb260)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4f7cd_calico-system(69ba96b0-551d-424c-b677-f69ea1cdb260)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:46.407438 kubelet[2201]: I0317 18:31:46.407368 2201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:31:46.408293 env[1316]: time="2025-03-17T18:31:46.407928587Z" level=info msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" Mar 17 18:31:46.408369 env[1316]: time="2025-03-17T18:31:46.408313914Z" level=info msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" Mar 17 18:31:46.412892 kubelet[2201]: E0317 18:31:46.412044 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:31:46.413032 env[1316]: time="2025-03-17T18:31:46.412750990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Mar 17 18:31:46.413926 env[1316]: time="2025-03-17T18:31:46.413868569Z" level=info msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" Mar 17 18:31:46.413988 kubelet[2201]: I0317 18:31:46.413222 2201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:31:46.423471 kubelet[2201]: I0317 18:31:46.416764 2201 pod_container_deletor.go:80] "Container 
not found in pod's containers" containerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:31:46.423471 kubelet[2201]: I0317 18:31:46.420680 2201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:31:46.423631 env[1316]: time="2025-03-17T18:31:46.417355869Z" level=info msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" Mar 17 18:31:46.423631 env[1316]: time="2025-03-17T18:31:46.421206775Z" level=info msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" Mar 17 18:31:46.452487 env[1316]: time="2025-03-17T18:31:46.452428349Z" level=error msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" failed" error="failed to destroy network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.453817 kubelet[2201]: E0317 18:31:46.453751 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:31:46.453918 kubelet[2201]: E0317 18:31:46.453828 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86"} Mar 17 18:31:46.453918 kubelet[2201]: E0317 18:31:46.453903 2201 
kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:46.454001 kubelet[2201]: E0317 18:31:46.453929 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-rgm7b" podUID="4d71cd08-81fa-44dc-85ff-58ab8ef8fce9" Mar 17 18:31:46.456607 env[1316]: time="2025-03-17T18:31:46.456553740Z" level=error msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" failed" error="failed to destroy network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.457052 kubelet[2201]: E0317 18:31:46.456916 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:31:46.457052 kubelet[2201]: E0317 18:31:46.456957 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3"} Mar 17 18:31:46.457052 kubelet[2201]: E0317 18:31:46.456987 2201 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a4b94281-bded-4df7-a0d8-c157b33b0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:46.457052 kubelet[2201]: E0317 18:31:46.457017 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a4b94281-bded-4df7-a0d8-c157b33b0138\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" podUID="a4b94281-bded-4df7-a0d8-c157b33b0138" Mar 17 18:31:46.468893 env[1316]: time="2025-03-17T18:31:46.468802069Z" level=error msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" failed" error="failed to destroy network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.469572 kubelet[2201]: E0317 18:31:46.469525 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:31:46.469677 kubelet[2201]: E0317 18:31:46.469577 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7"} Mar 17 18:31:46.469677 kubelet[2201]: E0317 18:31:46.469627 2201 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dae16c91-a8cb-494a-86d1-5ba60d550a02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:46.469677 kubelet[2201]: E0317 18:31:46.469647 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dae16c91-a8cb-494a-86d1-5ba60d550a02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-566884d556-6vd4w" podUID="dae16c91-a8cb-494a-86d1-5ba60d550a02" Mar 17 18:31:46.475115 env[1316]: time="2025-03-17T18:31:46.475056296Z" level=error msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" failed" error="failed to destroy network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.475456 kubelet[2201]: E0317 18:31:46.475405 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:31:46.475546 kubelet[2201]: E0317 18:31:46.475461 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd"} Mar 17 18:31:46.475546 kubelet[2201]: E0317 18:31:46.475510 2201 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5da61114-c0cd-4682-83c2-1d119dc4cf0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:46.475546 kubelet[2201]: E0317 18:31:46.475532 2201 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5da61114-c0cd-4682-83c2-1d119dc4cf0e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-blwg5" podUID="5da61114-c0cd-4682-83c2-1d119dc4cf0e" Mar 17 18:31:46.477368 env[1316]: time="2025-03-17T18:31:46.477333815Z" level=error msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" failed" error="failed to destroy network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:46.477720 kubelet[2201]: E0317 18:31:46.477600 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:31:46.477720 kubelet[2201]: E0317 18:31:46.477636 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b"} Mar 17 18:31:46.477720 kubelet[2201]: E0317 18:31:46.477673 2201 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"30b62e3f-2593-4372-ab8d-126ab81bae75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:46.477720 kubelet[2201]: E0317 18:31:46.477691 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"30b62e3f-2593-4372-ab8d-126ab81bae75\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" podUID="30b62e3f-2593-4372-ab8d-126ab81bae75" Mar 17 18:31:47.047451 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86-shm.mount: Deactivated successfully. Mar 17 18:31:47.047604 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd-shm.mount: Deactivated successfully. 
Mar 17 18:31:47.423444 kubelet[2201]: I0317 18:31:47.423094 2201 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:47.424274 env[1316]: time="2025-03-17T18:31:47.424093564Z" level=info msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" Mar 17 18:31:47.447165 env[1316]: time="2025-03-17T18:31:47.447113184Z" level=error msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" failed" error="failed to destroy network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 18:31:47.447549 kubelet[2201]: E0317 18:31:47.447494 2201 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:47.447634 kubelet[2201]: E0317 18:31:47.447559 2201 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88"} Mar 17 18:31:47.447634 kubelet[2201]: E0317 18:31:47.447591 2201 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"69ba96b0-551d-424c-b677-f69ea1cdb260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Mar 17 18:31:47.447634 kubelet[2201]: E0317 18:31:47.447621 2201 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"69ba96b0-551d-424c-b677-f69ea1cdb260\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4f7cd" podUID="69ba96b0-551d-424c-b677-f69ea1cdb260" Mar 17 18:31:47.526268 systemd[1]: Started sshd@8-10.0.0.124:22-10.0.0.1:42232.service. Mar 17 18:31:47.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.124:22-10.0.0.1:42232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:47.530371 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:31:47.530464 kernel: audit: type=1130 audit(1742236307.525:299): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.124:22-10.0.0.1:42232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:31:47.566000 audit[3338]: USER_ACCT pid=3338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.568365 sshd[3338]: Accepted publickey for core from 10.0.0.1 port 42232 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:31:47.570010 sshd[3338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:31:47.568000 audit[3338]: CRED_ACQ pid=3338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.574602 kernel: audit: type=1101 audit(1742236307.566:300): pid=3338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.574667 kernel: audit: type=1103 audit(1742236307.568:301): pid=3338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.574687 kernel: audit: type=1006 audit(1742236307.568:302): pid=3338 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Mar 17 18:31:47.574277 systemd-logind[1300]: New session 9 of user core. Mar 17 18:31:47.574780 systemd[1]: Started session-9.scope. 
Mar 17 18:31:47.580433 kernel: audit: type=1300 audit(1742236307.568:302): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2d19fa0 a2=3 a3=1 items=0 ppid=1 pid=3338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:47.568000 audit[3338]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe2d19fa0 a2=3 a3=1 items=0 ppid=1 pid=3338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:47.568000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:47.589674 kernel: audit: type=1327 audit(1742236307.568:302): proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:47.589000 audit[3338]: USER_START pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.590000 audit[3341]: CRED_ACQ pid=3341 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.599405 kernel: audit: type=1105 audit(1742236307.589:303): pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.599494 kernel: audit: type=1103 audit(1742236307.590:304): pid=3341 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.708404 sshd[3338]: pam_unix(sshd:session): session closed for user core Mar 17 18:31:47.707000 audit[3338]: USER_END pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.711939 systemd[1]: sshd@8-10.0.0.124:22-10.0.0.1:42232.service: Deactivated successfully. Mar 17 18:31:47.712950 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 18:31:47.712967 systemd-logind[1300]: Session 9 logged out. Waiting for processes to exit. Mar 17 18:31:47.707000 audit[3338]: CRED_DISP pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.713890 systemd-logind[1300]: Removed session 9. Mar 17 18:31:47.716186 kernel: audit: type=1106 audit(1742236307.707:305): pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.716240 kernel: audit: type=1104 audit(1742236307.707:306): pid=3338 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:47.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.124:22-10.0.0.1:42232 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Mar 17 18:31:51.879454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540769665.mount: Deactivated successfully. Mar 17 18:31:51.948341 env[1316]: time="2025-03-17T18:31:51.947969294Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:52.049878 env[1316]: time="2025-03-17T18:31:52.049832143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:52.054142 env[1316]: time="2025-03-17T18:31:52.054088923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:52.055785 env[1316]: time="2025-03-17T18:31:52.055753066Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:31:52.056270 env[1316]: time="2025-03-17T18:31:52.056239913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Mar 17 18:31:52.070085 env[1316]: time="2025-03-17T18:31:52.070038626Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 18:31:52.091901 env[1316]: time="2025-03-17T18:31:52.091831851Z" level=info msg="CreateContainer within sandbox \"eb9c2f2c1a50765e2b191525071b1bbd52ad669e4481c0203982898bf508d137\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id 
\"b7e024883cfdbabed82d39239a3e8b8cd5edb77f6d7a4b7a4622a2af43781e3c\""
Mar 17 18:31:52.092372 env[1316]: time="2025-03-17T18:31:52.092346018Z" level=info msg="StartContainer for \"b7e024883cfdbabed82d39239a3e8b8cd5edb77f6d7a4b7a4622a2af43781e3c\""
Mar 17 18:31:52.175878 env[1316]: time="2025-03-17T18:31:52.175773026Z" level=info msg="StartContainer for \"b7e024883cfdbabed82d39239a3e8b8cd5edb77f6d7a4b7a4622a2af43781e3c\" returns successfully"
Mar 17 18:31:52.346451 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
Mar 17 18:31:52.346594 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
Mar 17 18:31:52.434785 kubelet[2201]: E0317 18:31:52.434677 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:52.447779 kubelet[2201]: I0317 18:31:52.447723 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-szs4r" podStartSLOduration=1.051921961 podStartE2EDuration="16.447708153s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:31:36.661238092 +0000 UTC m=+22.436594322" lastFinishedPulling="2025-03-17 18:31:52.057024284 +0000 UTC m=+37.832380514" observedRunningTime="2025-03-17 18:31:52.447191745 +0000 UTC m=+38.222547975" watchObservedRunningTime="2025-03-17 18:31:52.447708153 +0000 UTC m=+38.223064383"
Mar 17 18:31:52.711541 systemd[1]: Started sshd@9-10.0.0.124:22-10.0.0.1:41578.service.
Mar 17 18:31:52.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.124:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:52.712743 kernel: kauditd_printk_skb: 1 callbacks suppressed
Mar 17 18:31:52.712820 kernel: audit: type=1130 audit(1742236312.710:308): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.124:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:52.749000 audit[3423]: USER_ACCT pid=3423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.750912 sshd[3423]: Accepted publickey for core from 10.0.0.1 port 41578 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:52.752194 sshd[3423]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:52.750000 audit[3423]: CRED_ACQ pid=3423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.757054 kernel: audit: type=1101 audit(1742236312.749:309): pid=3423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.757114 kernel: audit: type=1103 audit(1742236312.750:310): pid=3423 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.758778 systemd-logind[1300]: New session 10 of user core.
Mar 17 18:31:52.759368 kernel: audit: type=1006 audit(1742236312.750:311): pid=3423 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1
Mar 17 18:31:52.759399 kernel: audit: type=1300 audit(1742236312.750:311): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1b512b0 a2=3 a3=1 items=0 ppid=1 pid=3423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:52.750000 audit[3423]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1b512b0 a2=3 a3=1 items=0 ppid=1 pid=3423 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:52.759614 systemd[1]: Started session-10.scope.
Mar 17 18:31:52.762869 kernel: audit: type=1327 audit(1742236312.750:311): proctitle=737368643A20636F7265205B707269765D
Mar 17 18:31:52.750000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:31:52.764398 kernel: audit: type=1105 audit(1742236312.762:312): pid=3423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.762000 audit[3423]: USER_START pid=3423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.766000 audit[3426]: CRED_ACQ pid=3426 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.770641 kernel: audit: type=1103 audit(1742236312.766:313): pid=3426 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.904623 sshd[3423]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:52.905000 audit[3423]: USER_END pid=3423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.906545 systemd[1]: Started sshd@10-10.0.0.124:22-10.0.0.1:41590.service.
Mar 17 18:31:52.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.124:22-10.0.0.1:41590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:52.909532 systemd[1]: sshd@9-10.0.0.124:22-10.0.0.1:41578.service: Deactivated successfully.
Mar 17 18:31:52.910387 systemd-logind[1300]: Session 10 logged out. Waiting for processes to exit.
Mar 17 18:31:52.910470 systemd[1]: session-10.scope: Deactivated successfully.
Mar 17 18:31:52.911436 systemd-logind[1300]: Removed session 10.
Mar 17 18:31:52.913106 kernel: audit: type=1106 audit(1742236312.905:314): pid=3423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.913168 kernel: audit: type=1130 audit(1742236312.905:315): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.124:22-10.0.0.1:41590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:52.905000 audit[3423]: CRED_DISP pid=3423 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.124:22-10.0.0.1:41578 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:52.939000 audit[3437]: USER_ACCT pid=3437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.941175 sshd[3437]: Accepted publickey for core from 10.0.0.1 port 41590 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:52.940000 audit[3437]: CRED_ACQ pid=3437 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.940000 audit[3437]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed19ac50 a2=3 a3=1 items=0 ppid=1 pid=3437 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:52.940000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:31:52.942373 sshd[3437]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:52.946053 systemd-logind[1300]: New session 11 of user core.
Mar 17 18:31:52.946492 systemd[1]: Started session-11.scope.
Mar 17 18:31:52.949000 audit[3437]: USER_START pid=3437 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:52.950000 audit[3442]: CRED_ACQ pid=3442 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.105579 sshd[3437]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:53.106299 systemd[1]: Started sshd@11-10.0.0.124:22-10.0.0.1:41592.service.
Mar 17 18:31:53.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.124:22-10.0.0.1:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:53.106000 audit[3437]: USER_END pid=3437 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.106000 audit[3437]: CRED_DISP pid=3437 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.109765 systemd-logind[1300]: Session 11 logged out. Waiting for processes to exit.
Mar 17 18:31:53.109979 systemd[1]: sshd@10-10.0.0.124:22-10.0.0.1:41590.service: Deactivated successfully.
Mar 17 18:31:53.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.124:22-10.0.0.1:41590 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:53.110908 systemd[1]: session-11.scope: Deactivated successfully.
Mar 17 18:31:53.111342 systemd-logind[1300]: Removed session 11.
Mar 17 18:31:53.155000 audit[3449]: USER_ACCT pid=3449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.156648 sshd[3449]: Accepted publickey for core from 10.0.0.1 port 41592 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ
Mar 17 18:31:53.156000 audit[3449]: CRED_ACQ pid=3449 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.156000 audit[3449]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffdb7f3950 a2=3 a3=1 items=0 ppid=1 pid=3449 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.156000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Mar 17 18:31:53.157856 sshd[3449]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Mar 17 18:31:53.161268 systemd-logind[1300]: New session 12 of user core.
Mar 17 18:31:53.162064 systemd[1]: Started session-12.scope.
Mar 17 18:31:53.164000 audit[3449]: USER_START pid=3449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.165000 audit[3454]: CRED_ACQ pid=3454 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.285140 sshd[3449]: pam_unix(sshd:session): session closed for user core
Mar 17 18:31:53.284000 audit[3449]: USER_END pid=3449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.284000 audit[3449]: CRED_DISP pid=3449 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Mar 17 18:31:53.287758 systemd-logind[1300]: Session 12 logged out. Waiting for processes to exit.
Mar 17 18:31:53.287976 systemd[1]: sshd@11-10.0.0.124:22-10.0.0.1:41592.service: Deactivated successfully.
Mar 17 18:31:53.286000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.124:22-10.0.0.1:41592 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Mar 17 18:31:53.288802 systemd[1]: session-12.scope: Deactivated successfully.
Mar 17 18:31:53.289218 systemd-logind[1300]: Removed session 12.
Mar 17 18:31:53.436150 kubelet[2201]: I0317 18:31:53.435259 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 18:31:53.437767 kubelet[2201]: E0317 18:31:53.437691 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:53.604000 audit[3518]: AVC avc: denied { write } for pid=3518 comm="tee" name="fd" dev="proc" ino=19653 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.604000 audit[3518]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff874aa27 a2=241 a3=1b6 items=1 ppid=3473 pid=3518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.604000 audit: CWD cwd="/etc/service/enabled/bird/log"
Mar 17 18:31:53.604000 audit: PATH item=0 name="/dev/fd/63" inode=20525 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.604000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.607000 audit[3536]: AVC avc: denied { write } for pid=3536 comm="tee" name="fd" dev="proc" ino=19657 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.607000 audit[3536]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffcd4d6a26 a2=241 a3=1b6 items=1 ppid=3478 pid=3536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.607000 audit: CWD cwd="/etc/service/enabled/felix/log"
Mar 17 18:31:53.607000 audit: PATH item=0 name="/dev/fd/63" inode=19650 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.607000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.609000 audit[3520]: AVC avc: denied { write } for pid=3520 comm="tee" name="fd" dev="proc" ino=19661 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.609000 audit[3520]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe45eaa26 a2=241 a3=1b6 items=1 ppid=3474 pid=3520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.609000 audit: CWD cwd="/etc/service/enabled/bird6/log"
Mar 17 18:31:53.609000 audit: PATH item=0 name="/dev/fd/63" inode=19201 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.609000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.619000 audit[3528]: AVC avc: denied { write } for pid=3528 comm="tee" name="fd" dev="proc" ino=19219 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.619000 audit[3528]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdc58ea16 a2=241 a3=1b6 items=1 ppid=3475 pid=3528 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.619000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log"
Mar 17 18:31:53.619000 audit: PATH item=0 name="/dev/fd/63" inode=19649 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.619000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.623000 audit[3549]: AVC avc: denied { write } for pid=3549 comm="tee" name="fd" dev="proc" ino=19223 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.623000 audit[3549]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffff9b72a28 a2=241 a3=1b6 items=1 ppid=3482 pid=3549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.623000 audit: CWD cwd="/etc/service/enabled/cni/log"
Mar 17 18:31:53.623000 audit: PATH item=0 name="/dev/fd/63" inode=19216 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.623000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.624000 audit[3546]: AVC avc: denied { write } for pid=3546 comm="tee" name="fd" dev="proc" ino=19668 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.624000 audit[3546]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc0cd2a26 a2=241 a3=1b6 items=1 ppid=3481 pid=3546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.624000 audit: CWD cwd="/etc/service/enabled/confd/log"
Mar 17 18:31:53.624000 audit: PATH item=0 name="/dev/fd/63" inode=18320 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.624000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:53.678000 audit[3554]: AVC avc: denied { write } for pid=3554 comm="tee" name="fd" dev="proc" ino=18334 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0
Mar 17 18:31:53.678000 audit[3554]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffffa48a17 a2=241 a3=1b6 items=1 ppid=3487 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:53.678000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log"
Mar 17 18:31:53.678000 audit: PATH item=0 name="/dev/fd/63" inode=18323 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Mar 17 18:31:53.678000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633
Mar 17 18:31:56.315810 kubelet[2201]: I0317 18:31:56.315763 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 17 18:31:56.316401 kubelet[2201]: E0317 18:31:56.316368 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:56.367000 audit[3609]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:31:56.367000 audit[3609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=fffff7b3cdb0 a2=0 a3=1 items=0 ppid=2403 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:56.367000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:31:56.377000 audit[3609]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3609 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor"
Mar 17 18:31:56.377000 audit[3609]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff7b3cdb0 a2=0 a3=1 items=0 ppid=2403 pid=3609 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:56.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273
Mar 17 18:31:56.441508 kubelet[2201]: E0317 18:31:56.441463 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.881000 audit: BPF prog-id=10 op=LOAD
Mar 17 18:31:56.881000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc5b9ffb8 a2=98 a3=ffffc5b9ffa8 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:56.881000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:56.883000 audit: BPF prog-id=10 op=UNLOAD
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit: BPF prog-id=11 op=LOAD
Mar 17 18:31:56.888000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5b9fc48 a2=74 a3=95 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:56.888000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:56.888000 audit: BPF prog-id=11 op=UNLOAD
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:56.888000 audit: BPF prog-id=12 op=LOAD
Mar 17 18:31:56.888000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5b9fca8 a2=94 a3=2 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:56.888000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:56.888000 audit: BPF prog-id=12 op=UNLOAD
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit: BPF prog-id=13 op=LOAD
Mar 17 18:31:57.010000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc5b9fc68 a2=40 a3=ffffc5b9fc98 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.010000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.010000 audit: BPF prog-id=13 op=UNLOAD
Mar 17 18:31:57.010000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.010000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc5b9fd80 a2=50 a3=0 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.010000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fcd8 a2=28 a3=ffffc5b9fe08 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc5b9fd08 a2=28 a3=ffffc5b9fe38 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc5b9fbb8 a2=28 a3=ffffc5b9fce8 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fd28 a2=28 a3=ffffc5b9fe58 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fd08 a2=28 a3=ffffc5b9fe38 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fcf8 a2=28 a3=ffffc5b9fe28 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fd28 a2=28 a3=ffffc5b9fe58 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc5b9fd08 a2=28 a3=ffffc5b9fe38 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null)
Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E
Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc5b9fd28 a2=28 a3=ffffc5b9fe58 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool"
exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc5b9fcf8 a2=28 a3=ffffc5b9fe28 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc5b9fd78 a2=28 a3=ffffc5b9feb8 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc5b9fab0 a2=50 a3=0 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit: BPF prog-id=14 op=LOAD Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc5b9fab8 a2=94 a3=5 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit: BPF prog-id=14 op=UNLOAD Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc5b9fbc0 a2=50 a3=0 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc5b9fd08 a2=4 a3=3 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.026000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.026000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc5b9fce8 a2=94 a3=6 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.026000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.027000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc5b9f4b8 a2=94 a3=83 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.027000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { perfmon } for pid=3651 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { bpf } for pid=3651 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.027000 audit[3651]: AVC avc: denied { confidentiality } for pid=3651 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.027000 audit[3651]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc5b9f4b8 a2=94 a3=83 items=0 ppid=3613 pid=3651 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.027000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit: BPF prog-id=15 op=LOAD Mar 17 18:31:57.043000 audit[3677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd413ee98 a2=98 a3=ffffd413ee88 items=0 ppid=3613 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.043000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:31:57.043000 audit: BPF prog-id=15 op=UNLOAD Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit: BPF prog-id=16 op=LOAD Mar 17 18:31:57.043000 audit[3677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd413ed48 a2=74 a3=95 items=0 ppid=3613 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.043000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:31:57.043000 audit: BPF prog-id=16 op=UNLOAD Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { perfmon } for pid=3677 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit[3677]: AVC avc: denied { bpf } for pid=3677 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.043000 audit: BPF prog-id=17 op=LOAD Mar 17 18:31:57.043000 audit[3677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd413ed78 a2=40 
a3=ffffd413eda8 items=0 ppid=3613 pid=3677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.043000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Mar 17 18:31:57.043000 audit: BPF prog-id=17 op=UNLOAD Mar 17 18:31:57.098039 systemd-networkd[1094]: vxlan.calico: Link UP Mar 17 18:31:57.098048 systemd-networkd[1094]: vxlan.calico: Gained carrier Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.119000 audit: BPF prog-id=18 op=LOAD Mar 17 18:31:57.119000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3c6eb88 a2=98 a3=ffffc3c6eb78 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.119000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.120000 audit: BPF prog-id=18 op=UNLOAD Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.120000 audit: BPF prog-id=19 op=LOAD Mar 17 18:31:57.120000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3c6e868 a2=74 a3=95 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.120000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.120000 audit: BPF 
prog-id=19 op=UNLOAD Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit: BPF prog-id=20 
op=LOAD Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc3c6e8c8 a2=94 a3=2 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit: BPF prog-id=20 op=UNLOAD Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e8f8 a2=28 a3=ffffc3c6ea28 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc3c6e928 a2=28 a3=ffffc3c6ea58 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc3c6e7d8 a2=28 a3=ffffc3c6e908 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e948 a2=28 a3=ffffc3c6ea78 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for 
pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e928 a2=28 a3=ffffc3c6ea58 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e918 a2=28 a3=ffffc3c6ea48 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e948 a2=28 a3=ffffc3c6ea78 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc3c6e928 a2=28 a3=ffffc3c6ea58 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc3c6e948 a2=28 a3=ffffc3c6ea78 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc3c6e918 a2=28 a3=ffffc3c6ea48 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffc3c6e998 a2=28 a3=ffffc3c6ead8 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.121000 audit: BPF prog-id=21 op=LOAD Mar 17 18:31:57.121000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc3c6e7b8 a2=40 a3=ffffc3c6e7e8 items=0 ppid=3613 pid=3706 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.121000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.121000 audit: BPF prog-id=21 op=UNLOAD Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffc3c6e7e0 a2=50 a3=0 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.122000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffc3c6e7e0 a2=50 a3=0 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.122000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for 
pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit: BPF prog-id=22 op=LOAD Mar 17 18:31:57.122000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc3c6df48 a2=94 a3=2 items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.122000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.122000 audit: BPF prog-id=22 op=UNLOAD Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { perfmon } for pid=3706 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit[3706]: AVC avc: denied { bpf } for pid=3706 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.122000 audit: BPF prog-id=23 op=LOAD Mar 17 18:31:57.122000 audit[3706]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc3c6e0d8 a2=94 a3=2d items=0 ppid=3613 pid=3706 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.122000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.127000 audit: BPF prog-id=24 op=LOAD Mar 17 18:31:57.127000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc19d1d78 a2=98 a3=ffffc19d1d68 items=0 ppid=3613 pid=3710 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.127000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.127000 audit: BPF prog-id=24 op=UNLOAD Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 
audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit: BPF prog-id=25 op=LOAD Mar 17 18:31:57.128000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc19d1a08 a2=74 a3=95 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.128000 audit: BPF prog-id=25 op=UNLOAD Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for 
pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.128000 audit: BPF prog-id=26 op=LOAD Mar 17 18:31:57.128000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc19d1a68 a2=94 a3=2 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.128000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.128000 audit: BPF prog-id=26 op=UNLOAD Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit: BPF prog-id=27 op=LOAD Mar 17 18:31:57.214000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc19d1a28 a2=40 a3=ffffc19d1a58 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Mar 17 18:31:57.214000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.214000 audit: BPF prog-id=27 op=UNLOAD Mar 17 18:31:57.214000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.214000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc19d1b40 a2=50 a3=0 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.214000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1a98 a2=28 a3=ffffc19d1bc8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19d1ac8 a2=28 a3=ffffc19d1bf8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19d1978 a2=28 a3=ffffc19d1aa8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1ae8 a2=28 a3=ffffc19d1c18 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 
18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1ac8 a2=28 a3=ffffc19d1bf8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1ab8 a2=28 a3=ffffc19d1be8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1ae8 a2=28 a3=ffffc19d1c18 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19d1ac8 a2=28 a3=ffffc19d1bf8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19d1ae8 a2=28 a3=ffffc19d1c18 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19d1ab8 a2=28 a3=ffffc19d1be8 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19d1b38 a2=28 a3=ffffc19d1c78 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 
audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc19d1870 a2=50 a3=0 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit: BPF prog-id=28 op=LOAD Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc19d1878 a2=94 a3=5 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit: BPF prog-id=28 op=UNLOAD Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc19d1980 a2=50 a3=0 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } 
for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc19d1ac8 a2=4 a3=3 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 
17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { confidentiality } for pid=3710 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19d1aa8 a2=94 a3=6 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { confidentiality } for pid=3710 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19d1278 a2=94 a3=83 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: 
denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { perfmon } for pid=3710 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.223000 audit[3710]: AVC avc: denied { confidentiality } for pid=3710 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Mar 17 18:31:57.223000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19d1278 a2=94 a3=83 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.223000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.224000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.224000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc19d2cb8 a2=10 a3=ffffc19d2da8 items=0 ppid=3613 
pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.224000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.224000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc19d2b78 a2=10 a3=ffffc19d2c68 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.224000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.224000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc19d2ae8 a2=10 a3=ffffc19d2c68 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 
18:31:57.224000 audit[3710]: AVC avc: denied { bpf } for pid=3710 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Mar 17 18:31:57.224000 audit[3710]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffc19d2ae8 a2=10 a3=ffffc19d2c68 items=0 ppid=3613 pid=3710 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.224000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Mar 17 18:31:57.233000 audit: BPF prog-id=23 op=UNLOAD Mar 17 18:31:57.273000 audit[3736]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3736 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:31:57.273000 audit[3736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffd0969d00 a2=0 a3=ffffaf2f7fa8 items=0 ppid=3613 pid=3736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.273000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:31:57.278000 audit[3737]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3737 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:31:57.278000 audit[3737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe92f0a90 a2=0 a3=ffff9ed64fa8 items=0 ppid=3613 pid=3737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.278000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:31:57.281000 audit[3739]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3739 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:31:57.281000 audit[3739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18968 a0=3 a1=ffffc0b50650 a2=0 a3=ffff83c5ffa8 items=0 ppid=3613 pid=3739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.281000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:31:57.287000 audit[3740]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3740 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:31:57.287000 audit[3740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffff2cee640 a2=0 a3=ffffa5c0dfa8 items=0 ppid=3613 pid=3740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:57.287000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:31:58.290267 systemd[1]: Started sshd@12-10.0.0.124:22-10.0.0.1:41594.service. 
Mar 17 18:31:58.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.124:22-10.0.0.1:41594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:58.291786 kernel: kauditd_printk_skb: 544 callbacks suppressed Mar 17 18:31:58.291849 kernel: audit: type=1130 audit(1742236318.289:439): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.124:22-10.0.0.1:41594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:58.330000 audit[3747]: USER_ACCT pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.331785 sshd[3747]: Accepted publickey for core from 10.0.0.1 port 41594 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:31:58.333220 sshd[3747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:31:58.331000 audit[3747]: CRED_ACQ pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.339138 kernel: audit: type=1101 audit(1742236318.330:440): pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.339200 kernel: audit: type=1103 audit(1742236318.331:441): pid=3747 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.339235 kernel: audit: type=1006 audit(1742236318.331:442): pid=3747 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Mar 17 18:31:58.338816 systemd-logind[1300]: New session 13 of user core. Mar 17 18:31:58.339579 systemd[1]: Started session-13.scope. Mar 17 18:31:58.331000 audit[3747]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0e26900 a2=3 a3=1 items=0 ppid=1 pid=3747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:58.343486 kernel: audit: type=1300 audit(1742236318.331:442): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc0e26900 a2=3 a3=1 items=0 ppid=1 pid=3747 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:58.331000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:58.345605 kernel: audit: type=1327 audit(1742236318.331:442): proctitle=737368643A20636F7265205B707269765D Mar 17 18:31:58.345702 kernel: audit: type=1105 audit(1742236318.342:443): pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.342000 audit[3747]: USER_START pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.344000 audit[3750]: 
CRED_ACQ pid=3750 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.351339 kernel: audit: type=1103 audit(1742236318.344:444): pid=3750 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.477374 sshd[3747]: pam_unix(sshd:session): session closed for user core Mar 17 18:31:58.476000 audit[3747]: USER_END pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.480068 systemd[1]: sshd@12-10.0.0.124:22-10.0.0.1:41594.service: Deactivated successfully. Mar 17 18:31:58.481015 systemd-logind[1300]: Session 13 logged out. Waiting for processes to exit. Mar 17 18:31:58.481056 systemd[1]: session-13.scope: Deactivated successfully. Mar 17 18:31:58.481809 systemd-logind[1300]: Removed session 13. 
Mar 17 18:31:58.477000 audit[3747]: CRED_DISP pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.485082 kernel: audit: type=1106 audit(1742236318.476:445): pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.485166 kernel: audit: type=1104 audit(1742236318.477:446): pid=3747 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:31:58.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.124:22-10.0.0.1:41594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:31:58.710538 systemd-networkd[1094]: vxlan.calico: Gained IPv6LL Mar 17 18:31:59.311332 env[1316]: time="2025-03-17T18:31:59.311274830Z" level=info msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.375 [INFO][3779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.376 [INFO][3779] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" iface="eth0" netns="/var/run/netns/cni-a2a5a8cd-5537-cdc9-b84a-cfa616c06097" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.377 [INFO][3779] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" iface="eth0" netns="/var/run/netns/cni-a2a5a8cd-5537-cdc9-b84a-cfa616c06097" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.378 [INFO][3779] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" iface="eth0" netns="/var/run/netns/cni-a2a5a8cd-5537-cdc9-b84a-cfa616c06097" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.378 [INFO][3779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.378 [INFO][3779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.455 [INFO][3787] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.456 [INFO][3787] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.456 [INFO][3787] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.465 [WARNING][3787] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.465 [INFO][3787] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.466 [INFO][3787] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:31:59.469531 env[1316]: 2025-03-17 18:31:59.468 [INFO][3779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:31:59.470187 env[1316]: time="2025-03-17T18:31:59.470148833Z" level=info msg="TearDown network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" successfully" Mar 17 18:31:59.470266 env[1316]: time="2025-03-17T18:31:59.470247754Z" level=info msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" returns successfully" Mar 17 18:31:59.472367 systemd[1]: run-netns-cni\x2da2a5a8cd\x2d5537\x2dcdc9\x2db84a\x2dcfa616c06097.mount: Deactivated successfully. 
Mar 17 18:31:59.473469 env[1316]: time="2025-03-17T18:31:59.473441511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f7cd,Uid:69ba96b0-551d-424c-b677-f69ea1cdb260,Namespace:calico-system,Attempt:1,}" Mar 17 18:31:59.595471 systemd-networkd[1094]: calia19cc077760: Link UP Mar 17 18:31:59.596472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:31:59.596541 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calia19cc077760: link becomes ready Mar 17 18:31:59.598188 systemd-networkd[1094]: calia19cc077760: Gained carrier Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.520 [INFO][3794] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--4f7cd-eth0 csi-node-driver- calico-system 69ba96b0-551d-424c-b677-f69ea1cdb260 883 0 2025-03-17 18:31:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-4f7cd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia19cc077760 [] []}} ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.520 [INFO][3794] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.552 [INFO][3808] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" HandleID="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.565 [INFO][3808] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" HandleID="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400011c670), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-4f7cd", "timestamp":"2025-03-17 18:31:59.55270179 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.565 [INFO][3808] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.565 [INFO][3808] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.567 [INFO][3808] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.568 [INFO][3808] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.573 [INFO][3808] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.577 [INFO][3808] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.579 [INFO][3808] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.581 [INFO][3808] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.581 [INFO][3808] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.582 [INFO][3808] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.586 [INFO][3808] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.591 [INFO][3808] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" host="localhost" Mar 17 
18:31:59.640063 env[1316]: 2025-03-17 18:31:59.591 [INFO][3808] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" host="localhost" Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.591 [INFO][3808] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:31:59.640063 env[1316]: 2025-03-17 18:31:59.591 [INFO][3808] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" HandleID="k8s-pod-network.afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.593 [INFO][3794] cni-plugin/k8s.go 386: Populated endpoint ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4f7cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ba96b0-551d-424c-b677-f69ea1cdb260", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-4f7cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19cc077760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.593 [INFO][3794] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.593 [INFO][3794] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia19cc077760 ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.597 [INFO][3794] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.597 [INFO][3794] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4f7cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ba96b0-551d-424c-b677-f69ea1cdb260", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e", Pod:"csi-node-driver-4f7cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19cc077760", MAC:"1a:ba:bc:93:cf:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:31:59.640662 env[1316]: 2025-03-17 18:31:59.627 [INFO][3794] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e" Namespace="calico-system" Pod="csi-node-driver-4f7cd" WorkloadEndpoint="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:31:59.647000 audit[3831]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3831 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:31:59.647000 audit[3831]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=19148 a0=3 a1=ffffeb687b00 a2=0 a3=ffff8066bfa8 items=0 ppid=3613 pid=3831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:31:59.647000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:31:59.651629 env[1316]: time="2025-03-17T18:31:59.651565857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:31:59.651713 env[1316]: time="2025-03-17T18:31:59.651644818Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:31:59.651713 env[1316]: time="2025-03-17T18:31:59.651670978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:31:59.651860 env[1316]: time="2025-03-17T18:31:59.651829780Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e pid=3839 runtime=io.containerd.runc.v2 Mar 17 18:31:59.694519 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:31:59.704681 env[1316]: time="2025-03-17T18:31:59.704632272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4f7cd,Uid:69ba96b0-551d-424c-b677-f69ea1cdb260,Namespace:calico-system,Attempt:1,} returns sandbox id \"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e\"" Mar 17 18:31:59.707432 env[1316]: time="2025-03-17T18:31:59.706390053Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Mar 17 18:32:00.311491 env[1316]: time="2025-03-17T18:32:00.311452150Z" level=info msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" Mar 17 18:32:00.312335 env[1316]: time="2025-03-17T18:32:00.312308640Z" level=info msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" Mar 17 18:32:00.312676 env[1316]: time="2025-03-17T18:32:00.312401001Z" level=info msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" Mar 17 18:32:00.313231 env[1316]: time="2025-03-17T18:32:00.312585123Z" level=info msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.376 [INFO][3921] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.377 [INFO][3921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" iface="eth0" netns="/var/run/netns/cni-02be8ed0-fdb1-a814-b88a-15aedf8cc206" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.377 [INFO][3921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" iface="eth0" netns="/var/run/netns/cni-02be8ed0-fdb1-a814-b88a-15aedf8cc206" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.378 [INFO][3921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" iface="eth0" netns="/var/run/netns/cni-02be8ed0-fdb1-a814-b88a-15aedf8cc206" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.378 [INFO][3921] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.378 [INFO][3921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.423 [INFO][3965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.423 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.423 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.432 [WARNING][3965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.432 [INFO][3965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.434 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.436942 env[1316]: 2025-03-17 18:32:00.435 [INFO][3921] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:00.440873 env[1316]: time="2025-03-17T18:32:00.440825817Z" level=info msg="TearDown network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" successfully" Mar 17 18:32:00.440991 env[1316]: time="2025-03-17T18:32:00.440973858Z" level=info msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" returns successfully" Mar 17 18:32:00.441397 kubelet[2201]: E0317 18:32:00.441370 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:00.442101 env[1316]: time="2025-03-17T18:32:00.442067911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blwg5,Uid:5da61114-c0cd-4682-83c2-1d119dc4cf0e,Namespace:kube-system,Attempt:1,}" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.387 [INFO][3942] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 
18:32:00.454504 env[1316]: 2025-03-17 18:32:00.387 [INFO][3942] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" iface="eth0" netns="/var/run/netns/cni-d2da6fb0-5891-d834-c791-74b456be01d5" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.388 [INFO][3942] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" iface="eth0" netns="/var/run/netns/cni-d2da6fb0-5891-d834-c791-74b456be01d5" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.388 [INFO][3942] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" iface="eth0" netns="/var/run/netns/cni-d2da6fb0-5891-d834-c791-74b456be01d5" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.388 [INFO][3942] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.388 [INFO][3942] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.428 [INFO][3972] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.428 [INFO][3972] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.434 [INFO][3972] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.448 [WARNING][3972] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.448 [INFO][3972] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.450 [INFO][3972] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.454504 env[1316]: 2025-03-17 18:32:00.452 [INFO][3942] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:00.455039 env[1316]: time="2025-03-17T18:32:00.454641213Z" level=info msg="TearDown network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" successfully" Mar 17 18:32:00.455039 env[1316]: time="2025-03-17T18:32:00.454676934Z" level=info msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" returns successfully" Mar 17 18:32:00.455166 kubelet[2201]: E0317 18:32:00.454957 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:00.455486 env[1316]: time="2025-03-17T18:32:00.455458702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgm7b,Uid:4d71cd08-81fa-44dc-85ff-58ab8ef8fce9,Namespace:kube-system,Attempt:1,}" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.395 
[INFO][3941] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.395 [INFO][3941] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" iface="eth0" netns="/var/run/netns/cni-52b3d8b0-1700-80b2-fd85-8f2fb6bae20c" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.395 [INFO][3941] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" iface="eth0" netns="/var/run/netns/cni-52b3d8b0-1700-80b2-fd85-8f2fb6bae20c" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.396 [INFO][3941] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" iface="eth0" netns="/var/run/netns/cni-52b3d8b0-1700-80b2-fd85-8f2fb6bae20c" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.396 [INFO][3941] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.396 [INFO][3941] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.431 [INFO][3973] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.431 [INFO][3973] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.450 [INFO][3973] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.461 [WARNING][3973] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.461 [INFO][3973] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.462 [INFO][3973] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.468915 env[1316]: 2025-03-17 18:32:00.464 [INFO][3941] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:00.469332 env[1316]: time="2025-03-17T18:32:00.469024776Z" level=info msg="TearDown network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" successfully" Mar 17 18:32:00.469332 env[1316]: time="2025-03-17T18:32:00.469053297Z" level=info msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" returns successfully" Mar 17 18:32:00.469692 env[1316]: time="2025-03-17T18:32:00.469661264Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566884d556-6vd4w,Uid:dae16c91-a8cb-494a-86d1-5ba60d550a02,Namespace:calico-system,Attempt:1,}" Mar 17 18:32:00.473964 systemd[1]: run-netns-cni\x2d52b3d8b0\x2d1700\x2d80b2\x2dfd85\x2d8f2fb6bae20c.mount: Deactivated successfully. Mar 17 18:32:00.474097 systemd[1]: run-netns-cni\x2dd2da6fb0\x2d5891\x2dd834\x2dc791\x2d74b456be01d5.mount: Deactivated successfully. Mar 17 18:32:00.474199 systemd[1]: run-netns-cni\x2d02be8ed0\x2dfdb1\x2da814\x2db88a\x2d15aedf8cc206.mount: Deactivated successfully. Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" iface="eth0" netns="/var/run/netns/cni-6f83b71f-11ae-ab50-d319-50da8ecf506f" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" iface="eth0" netns="/var/run/netns/cni-6f83b71f-11ae-ab50-d319-50da8ecf506f" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" iface="eth0" netns="/var/run/netns/cni-6f83b71f-11ae-ab50-d319-50da8ecf506f" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.407 [INFO][3943] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.449 [INFO][3982] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.449 [INFO][3982] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.462 [INFO][3982] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.477 [WARNING][3982] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.477 [INFO][3982] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.479 [INFO][3982] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.488896 env[1316]: 2025-03-17 18:32:00.486 [INFO][3943] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:00.490880 env[1316]: time="2025-03-17T18:32:00.490843064Z" level=info msg="TearDown network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" successfully" Mar 17 18:32:00.490984 env[1316]: time="2025-03-17T18:32:00.490968145Z" level=info msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" returns successfully" Mar 17 18:32:00.492573 systemd[1]: run-netns-cni\x2d6f83b71f\x2d11ae\x2dab50\x2dd319\x2d50da8ecf506f.mount: Deactivated successfully. 
Mar 17 18:32:00.493725 env[1316]: time="2025-03-17T18:32:00.493598455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-nw9hv,Uid:30b62e3f-2593-4372-ab8d-126ab81bae75,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:32:00.610430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Mar 17 18:32:00.610554 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliceb5ac3e1d8: link becomes ready Mar 17 18:32:00.610921 systemd-networkd[1094]: caliceb5ac3e1d8: Link UP Mar 17 18:32:00.611054 systemd-networkd[1094]: caliceb5ac3e1d8: Gained carrier Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.516 [INFO][4008] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0 coredns-7db6d8ff4d- kube-system 4d71cd08-81fa-44dc-85ff-58ab8ef8fce9 895 0 2025-03-17 18:31:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-rgm7b eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliceb5ac3e1d8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.516 [INFO][4008] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.562 [INFO][4057] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" HandleID="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.575 [INFO][4057] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" HandleID="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40004203f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-rgm7b", "timestamp":"2025-03-17 18:32:00.562472116 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.575 [INFO][4057] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.575 [INFO][4057] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.575 [INFO][4057] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.576 [INFO][4057] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.580 [INFO][4057] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.585 [INFO][4057] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.586 [INFO][4057] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.588 [INFO][4057] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.588 [INFO][4057] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.590 [INFO][4057] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5 Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.596 [INFO][4057] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.604 [INFO][4057] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" host="localhost" Mar 17 
18:32:00.629318 env[1316]: 2025-03-17 18:32:00.604 [INFO][4057] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" host="localhost" Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.604 [INFO][4057] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.629318 env[1316]: 2025-03-17 18:32:00.604 [INFO][4057] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" HandleID="k8s-pod-network.57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.607 [INFO][4008] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-rgm7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb5ac3e1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.607 [INFO][4008] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.607 [INFO][4008] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliceb5ac3e1d8 ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.611 [INFO][4008] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.611 [INFO][4008] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5", Pod:"coredns-7db6d8ff4d-rgm7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb5ac3e1d8", MAC:"fe:2e:41:cd:ac:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.629928 env[1316]: 2025-03-17 18:32:00.625 [INFO][4008] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-rgm7b" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:00.647000 audit[4100]: NETFILTER_CFG table=filter:102 family=2 entries=38 op=nft_register_chain pid=4100 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:32:00.647000 audit[4100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=fffff8c1c2e0 a2=0 a3=ffffb9f94fa8 items=0 ppid=3613 pid=4100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:00.647000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:32:00.658084 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali360d2a0b563: link becomes ready Mar 17 18:32:00.657851 systemd-networkd[1094]: cali360d2a0b563: Link UP Mar 17 18:32:00.657977 systemd-networkd[1094]: cali360d2a0b563: Gained carrier Mar 17 18:32:00.660767 env[1316]: time="2025-03-17T18:32:00.660010621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:00.660767 env[1316]: time="2025-03-17T18:32:00.660052022Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:00.660767 env[1316]: time="2025-03-17T18:32:00.660062982Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:00.664045 env[1316]: time="2025-03-17T18:32:00.663971386Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5 pid=4107 runtime=io.containerd.runc.v2 Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.516 [INFO][3996] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0 coredns-7db6d8ff4d- kube-system 5da61114-c0cd-4682-83c2-1d119dc4cf0e 894 0 2025-03-17 18:31:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-blwg5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali360d2a0b563 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.516 [INFO][3996] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.564 [INFO][4051] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" HandleID="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.585 
[INFO][4051] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" HandleID="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000302f70), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-blwg5", "timestamp":"2025-03-17 18:32:00.564307376 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.586 [INFO][4051] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.605 [INFO][4051] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.605 [INFO][4051] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.606 [INFO][4051] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.612 [INFO][4051] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.628 [INFO][4051] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.635 [INFO][4051] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.638 [INFO][4051] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.638 [INFO][4051] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.640 [INFO][4051] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.644 [INFO][4051] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.650 [INFO][4051] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" host="localhost" Mar 17 
18:32:00.669551 env[1316]: 2025-03-17 18:32:00.650 [INFO][4051] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" host="localhost" Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.650 [INFO][4051] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.669551 env[1316]: 2025-03-17 18:32:00.650 [INFO][4051] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" HandleID="k8s-pod-network.d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.654 [INFO][3996] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5da61114-c0cd-4682-83c2-1d119dc4cf0e", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", 
ContainerID:"", Pod:"coredns-7db6d8ff4d-blwg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360d2a0b563", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.654 [INFO][3996] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.654 [INFO][3996] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali360d2a0b563 ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.656 [INFO][3996] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.657 [INFO][3996] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5da61114-c0cd-4682-83c2-1d119dc4cf0e", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae", Pod:"coredns-7db6d8ff4d-blwg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360d2a0b563", MAC:"f6:20:5b:99:53:68", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.670828 env[1316]: 2025-03-17 18:32:00.666 [INFO][3996] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae" Namespace="kube-system" Pod="coredns-7db6d8ff4d-blwg5" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:00.683000 audit[4146]: NETFILTER_CFG table=filter:103 family=2 entries=34 op=nft_register_chain pid=4146 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:32:00.683000 audit[4146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=18220 a0=3 a1=ffffd85d7180 a2=0 a3=ffff7f723fa8 items=0 ppid=3613 pid=4146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:00.683000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:32:00.688385 env[1316]: time="2025-03-17T18:32:00.688308782Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:00.688385 env[1316]: time="2025-03-17T18:32:00.688350143Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:00.688385 env[1316]: time="2025-03-17T18:32:00.688360503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:00.693001 env[1316]: time="2025-03-17T18:32:00.692037825Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae pid=4147 runtime=io.containerd.runc.v2 Mar 17 18:32:00.698937 systemd-networkd[1094]: calibb2dee2514a: Link UP Mar 17 18:32:00.700525 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calibb2dee2514a: link becomes ready Mar 17 18:32:00.700855 systemd-networkd[1094]: calibb2dee2514a: Gained carrier Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.547 [INFO][4023] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0 calico-kube-controllers-566884d556- calico-system dae16c91-a8cb-494a-86d1-5ba60d550a02 896 0 2025-03-17 18:31:36 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:566884d556 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-566884d556-6vd4w eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calibb2dee2514a [] []}} ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.547 [INFO][4023] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 
17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.616 [INFO][4066] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" HandleID="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.635 [INFO][4066] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" HandleID="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000332ca0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-566884d556-6vd4w", "timestamp":"2025-03-17 18:32:00.61662305 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.635 [INFO][4066] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.651 [INFO][4066] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.651 [INFO][4066] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.653 [INFO][4066] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.659 [INFO][4066] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.672 [INFO][4066] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.674 [INFO][4066] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.676 [INFO][4066] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.676 [INFO][4066] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.678 [INFO][4066] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302 Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.682 [INFO][4066] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.690 [INFO][4066] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" host="localhost" Mar 17 
18:32:00.716435 env[1316]: 2025-03-17 18:32:00.690 [INFO][4066] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" host="localhost" Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.690 [INFO][4066] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.716435 env[1316]: 2025-03-17 18:32:00.690 [INFO][4066] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" HandleID="k8s-pod-network.378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.697 [INFO][4023] cni-plugin/k8s.go 386: Populated endpoint ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0", GenerateName:"calico-kube-controllers-566884d556-", Namespace:"calico-system", SelfLink:"", UID:"dae16c91-a8cb-494a-86d1-5ba60d550a02", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566884d556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-566884d556-6vd4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb2dee2514a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.697 [INFO][4023] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.697 [INFO][4023] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibb2dee2514a ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.700 [INFO][4023] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.701 [INFO][4023] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" 
Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0", GenerateName:"calico-kube-controllers-566884d556-", Namespace:"calico-system", SelfLink:"", UID:"dae16c91-a8cb-494a-86d1-5ba60d550a02", ResourceVersion:"896", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566884d556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302", Pod:"calico-kube-controllers-566884d556-6vd4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb2dee2514a", MAC:"1e:cd:59:6d:f5:df", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.717177 env[1316]: 2025-03-17 18:32:00.710 [INFO][4023] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302" Namespace="calico-system" Pod="calico-kube-controllers-566884d556-6vd4w" 
WorkloadEndpoint="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:00.727000 audit[4189]: NETFILTER_CFG table=filter:104 family=2 entries=42 op=nft_register_chain pid=4189 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:32:00.727000 audit[4189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21016 a0=3 a1=ffffc699dad0 a2=0 a3=ffffa76b6fa8 items=0 ppid=3613 pid=4189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:00.727000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:32:00.732648 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:00.747533 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): calib2ee0f7b036: link becomes ready Mar 17 18:32:00.747067 systemd-networkd[1094]: calib2ee0f7b036: Link UP Mar 17 18:32:00.747426 systemd-networkd[1094]: calib2ee0f7b036: Gained carrier Mar 17 18:32:00.760707 env[1316]: time="2025-03-17T18:32:00.760162197Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:00.760707 env[1316]: time="2025-03-17T18:32:00.760225278Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:00.760707 env[1316]: time="2025-03-17T18:32:00.760236238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:00.760707 env[1316]: time="2025-03-17T18:32:00.760610402Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302 pid=4206 runtime=io.containerd.runc.v2 Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.557 [INFO][4037] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0 calico-apiserver-f7b5bdcb8- calico-apiserver 30b62e3f-2593-4372-ab8d-126ab81bae75 897 0 2025-03-17 18:31:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f7b5bdcb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f7b5bdcb8-nw9hv eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib2ee0f7b036 [] []}} ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.558 [INFO][4037] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.627 [INFO][4073] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" HandleID="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" 
Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.648 [INFO][4073] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" HandleID="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031e7d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f7b5bdcb8-nw9hv", "timestamp":"2025-03-17 18:32:00.627953978 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.648 [INFO][4073] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.690 [INFO][4073] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.690 [INFO][4073] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.692 [INFO][4073] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.697 [INFO][4073] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.710 [INFO][4073] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.713 [INFO][4073] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.717 [INFO][4073] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.717 [INFO][4073] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.719 [INFO][4073] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77 Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.723 [INFO][4073] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.729 [INFO][4073] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" host="localhost" Mar 17 
18:32:00.763304 env[1316]: 2025-03-17 18:32:00.729 [INFO][4073] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" host="localhost" Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.729 [INFO][4073] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:00.763304 env[1316]: 2025-03-17 18:32:00.729 [INFO][4073] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" HandleID="k8s-pod-network.539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.733 [INFO][4037] cni-plugin/k8s.go 386: Populated endpoint ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30b62e3f-2593-4372-ab8d-126ab81bae75", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f7b5bdcb8-nw9hv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ee0f7b036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.734 [INFO][4037] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.734 [INFO][4037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib2ee0f7b036 ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.746 [INFO][4037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.747 [INFO][4037] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30b62e3f-2593-4372-ab8d-126ab81bae75", ResourceVersion:"897", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77", Pod:"calico-apiserver-f7b5bdcb8-nw9hv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ee0f7b036", MAC:"26:45:ab:ca:37:19", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:00.763871 env[1316]: 2025-03-17 18:32:00.758 [INFO][4037] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-nw9hv" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:00.766381 
systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:00.768000 audit[4230]: NETFILTER_CFG table=filter:105 family=2 entries=62 op=nft_register_chain pid=4230 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:32:00.768000 audit[4230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=31096 a0=3 a1=ffffd283ac20 a2=0 a3=ffff883f0fa8 items=0 ppid=3613 pid=4230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:00.768000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:32:00.771035 env[1316]: time="2025-03-17T18:32:00.770998080Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rgm7b,Uid:4d71cd08-81fa-44dc-85ff-58ab8ef8fce9,Namespace:kube-system,Attempt:1,} returns sandbox id \"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5\"" Mar 17 18:32:00.772239 kubelet[2201]: E0317 18:32:00.771830 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:00.774752 env[1316]: time="2025-03-17T18:32:00.774722402Z" level=info msg="CreateContainer within sandbox \"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:32:00.794512 env[1316]: time="2025-03-17T18:32:00.794468866Z" level=info msg="CreateContainer within sandbox \"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f820c5bafe30ba465d4670641840c73fa477583e369658f9aa5ac467a5b20dfa\"" Mar 17 
18:32:00.795314 env[1316]: time="2025-03-17T18:32:00.795280795Z" level=info msg="StartContainer for \"f820c5bafe30ba465d4670641840c73fa477583e369658f9aa5ac467a5b20dfa\"" Mar 17 18:32:00.795791 env[1316]: time="2025-03-17T18:32:00.795729840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:00.795899 env[1316]: time="2025-03-17T18:32:00.795877082Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:00.796040 env[1316]: time="2025-03-17T18:32:00.796010763Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:00.796554 env[1316]: time="2025-03-17T18:32:00.796504889Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77 pid=4259 runtime=io.containerd.runc.v2 Mar 17 18:32:00.804137 env[1316]: time="2025-03-17T18:32:00.804096535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-blwg5,Uid:5da61114-c0cd-4682-83c2-1d119dc4cf0e,Namespace:kube-system,Attempt:1,} returns sandbox id \"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae\"" Mar 17 18:32:00.805076 kubelet[2201]: E0317 18:32:00.804965 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:00.811214 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:00.812827 env[1316]: time="2025-03-17T18:32:00.812780433Z" level=info msg="CreateContainer within sandbox \"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Mar 17 18:32:00.841671 env[1316]: time="2025-03-17T18:32:00.841620760Z" level=info msg="CreateContainer within sandbox \"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3053add67629e4edcd4a8c3362e5eabcebf45bb2666966a70825a7f4bbc953f7\"" Mar 17 18:32:00.844004 env[1316]: time="2025-03-17T18:32:00.843642063Z" level=info msg="StartContainer for \"3053add67629e4edcd4a8c3362e5eabcebf45bb2666966a70825a7f4bbc953f7\"" Mar 17 18:32:00.852554 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:00.856800 env[1316]: time="2025-03-17T18:32:00.856765372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-566884d556-6vd4w,Uid:dae16c91-a8cb-494a-86d1-5ba60d550a02,Namespace:calico-system,Attempt:1,} returns sandbox id \"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302\"" Mar 17 18:32:00.869550 env[1316]: time="2025-03-17T18:32:00.869314674Z" level=info msg="StartContainer for \"f820c5bafe30ba465d4670641840c73fa477583e369658f9aa5ac467a5b20dfa\" returns successfully" Mar 17 18:32:00.877883 env[1316]: time="2025-03-17T18:32:00.877796890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-nw9hv,Uid:30b62e3f-2593-4372-ab8d-126ab81bae75,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77\"" Mar 17 18:32:00.910541 env[1316]: time="2025-03-17T18:32:00.910441421Z" level=info msg="StartContainer for \"3053add67629e4edcd4a8c3362e5eabcebf45bb2666966a70825a7f4bbc953f7\" returns successfully" Mar 17 18:32:01.254801 env[1316]: time="2025-03-17T18:32:01.254688941Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Mar 17 18:32:01.256449 env[1316]: time="2025-03-17T18:32:01.256404480Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:01.260537 env[1316]: time="2025-03-17T18:32:01.260506245Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:01.261661 env[1316]: time="2025-03-17T18:32:01.261632178Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:01.262296 env[1316]: time="2025-03-17T18:32:01.262265545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Mar 17 18:32:01.264666 env[1316]: time="2025-03-17T18:32:01.264619411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Mar 17 18:32:01.266166 env[1316]: time="2025-03-17T18:32:01.266120708Z" level=info msg="CreateContainer within sandbox \"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 18:32:01.282711 env[1316]: time="2025-03-17T18:32:01.282657851Z" level=info msg="CreateContainer within sandbox \"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a32e6ef923184faeb5de83c9b7d1d0c33d0b9d71f95b6cf70b05e92f891e7389\"" Mar 17 18:32:01.283284 env[1316]: time="2025-03-17T18:32:01.283255698Z" level=info msg="StartContainer for 
\"a32e6ef923184faeb5de83c9b7d1d0c33d0b9d71f95b6cf70b05e92f891e7389\"" Mar 17 18:32:01.311240 env[1316]: time="2025-03-17T18:32:01.311194608Z" level=info msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" Mar 17 18:32:01.349249 env[1316]: time="2025-03-17T18:32:01.349199069Z" level=info msg="StartContainer for \"a32e6ef923184faeb5de83c9b7d1d0c33d0b9d71f95b6cf70b05e92f891e7389\" returns successfully" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" iface="eth0" netns="/var/run/netns/cni-b63d9013-cef6-efc6-1527-5a9b96f7bc75" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" iface="eth0" netns="/var/run/netns/cni-b63d9013-cef6-efc6-1527-5a9b96f7bc75" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" iface="eth0" netns="/var/run/netns/cni-b63d9013-cef6-efc6-1527-5a9b96f7bc75" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.361 [INFO][4424] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.380 [INFO][4442] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.380 [INFO][4442] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.380 [INFO][4442] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.388 [WARNING][4442] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.388 [INFO][4442] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.389 [INFO][4442] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:01.392798 env[1316]: 2025-03-17 18:32:01.391 [INFO][4424] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:01.393244 env[1316]: time="2025-03-17T18:32:01.392941594Z" level=info msg="TearDown network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" successfully" Mar 17 18:32:01.393244 env[1316]: time="2025-03-17T18:32:01.392979795Z" level=info msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" returns successfully" Mar 17 18:32:01.393747 env[1316]: time="2025-03-17T18:32:01.393694843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-jwwvk,Uid:a4b94281-bded-4df7-a0d8-c157b33b0138,Namespace:calico-apiserver,Attempt:1,}" Mar 17 18:32:01.399537 systemd-networkd[1094]: calia19cc077760: Gained IPv6LL Mar 17 18:32:01.452231 kubelet[2201]: E0317 18:32:01.452189 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:01.459646 kubelet[2201]: E0317 18:32:01.459614 2201 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:01.464924 kubelet[2201]: I0317 18:32:01.464862 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rgm7b" podStartSLOduration=32.464848992 podStartE2EDuration="32.464848992s" podCreationTimestamp="2025-03-17 18:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:32:01.460952509 +0000 UTC m=+47.236308739" watchObservedRunningTime="2025-03-17 18:32:01.464848992 +0000 UTC m=+47.240205222" Mar 17 18:32:01.485396 systemd[1]: run-netns-cni\x2db63d9013\x2dcef6\x2defc6\x2d1527\x2d5a9b96f7bc75.mount: Deactivated successfully. Mar 17 18:32:01.494609 kubelet[2201]: I0317 18:32:01.494517 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-blwg5" podStartSLOduration=32.494501441 podStartE2EDuration="32.494501441s" podCreationTimestamp="2025-03-17 18:31:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-17 18:32:01.491888932 +0000 UTC m=+47.267245162" watchObservedRunningTime="2025-03-17 18:32:01.494501441 +0000 UTC m=+47.269857631" Mar 17 18:32:01.494000 audit[4472]: NETFILTER_CFG table=filter:106 family=2 entries=16 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:01.494000 audit[4472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffe1d7f950 a2=0 a3=1 items=0 ppid=2403 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:01.494000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:01.502000 audit[4472]: NETFILTER_CFG table=nat:107 family=2 entries=14 op=nft_register_rule pid=4472 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:01.502000 audit[4472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe1d7f950 a2=0 a3=1 items=0 ppid=2403 pid=4472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:01.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:01.519000 audit[4475]: NETFILTER_CFG table=filter:108 family=2 entries=13 op=nft_register_rule pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:01.519000 audit[4475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffe200bf40 a2=0 a3=1 items=0 ppid=2403 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:01.519000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:01.530603 systemd-networkd[1094]: cali665280dfb6c: Link UP Mar 17 18:32:01.532289 systemd-networkd[1094]: cali665280dfb6c: Gained carrier Mar 17 18:32:01.532453 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali665280dfb6c: link becomes ready Mar 17 18:32:01.540000 audit[4475]: NETFILTER_CFG table=nat:109 family=2 entries=47 op=nft_register_chain pid=4475 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:01.540000 audit[4475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 
a1=ffffe200bf40 a2=0 a3=1 items=0 ppid=2403 pid=4475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:01.540000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.433 [INFO][4450] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0 calico-apiserver-f7b5bdcb8- calico-apiserver a4b94281-bded-4df7-a0d8-c157b33b0138 928 0 2025-03-17 18:31:36 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f7b5bdcb8 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f7b5bdcb8-jwwvk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali665280dfb6c [] []}} ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.433 [INFO][4450] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.477 [INFO][4464] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" 
HandleID="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.496 [INFO][4464] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" HandleID="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f3ee0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f7b5bdcb8-jwwvk", "timestamp":"2025-03-17 18:32:01.477155768 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.496 [INFO][4464] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.496 [INFO][4464] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.496 [INFO][4464] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.500 [INFO][4464] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.505 [INFO][4464] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.510 [INFO][4464] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.511 [INFO][4464] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.513 [INFO][4464] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.513 [INFO][4464] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.515 [INFO][4464] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487 Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.518 [INFO][4464] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.524 [INFO][4464] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" host="localhost" Mar 17 
18:32:01.548445 env[1316]: 2025-03-17 18:32:01.525 [INFO][4464] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" host="localhost" Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.525 [INFO][4464] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:01.548445 env[1316]: 2025-03-17 18:32:01.525 [INFO][4464] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" HandleID="k8s-pod-network.e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.527 [INFO][4450] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4b94281-bded-4df7-a0d8-c157b33b0138", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f7b5bdcb8-jwwvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali665280dfb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.527 [INFO][4450] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.527 [INFO][4450] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali665280dfb6c ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.532 [INFO][4450] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.534 [INFO][4450] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4b94281-bded-4df7-a0d8-c157b33b0138", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487", Pod:"calico-apiserver-f7b5bdcb8-jwwvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali665280dfb6c", MAC:"7a:9c:c3:14:e0:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:01.548980 env[1316]: 2025-03-17 18:32:01.546 [INFO][4450] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487" Namespace="calico-apiserver" Pod="calico-apiserver-f7b5bdcb8-jwwvk" WorkloadEndpoint="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:01.556000 audit[4492]: 
NETFILTER_CFG table=filter:110 family=2 entries=46 op=nft_register_chain pid=4492 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Mar 17 18:32:01.556000 audit[4492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23876 a0=3 a1=ffffc03f38e0 a2=0 a3=ffff9d58cfa8 items=0 ppid=3613 pid=4492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:01.556000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Mar 17 18:32:01.561608 env[1316]: time="2025-03-17T18:32:01.561539984Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 18:32:01.561608 env[1316]: time="2025-03-17T18:32:01.561583385Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 18:32:01.561742 env[1316]: time="2025-03-17T18:32:01.561596145Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 18:32:01.561789 env[1316]: time="2025-03-17T18:32:01.561738506Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487 pid=4500 runtime=io.containerd.runc.v2 Mar 17 18:32:01.598575 systemd-resolved[1232]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 17 18:32:01.616617 env[1316]: time="2025-03-17T18:32:01.616569555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f7b5bdcb8-jwwvk,Uid:a4b94281-bded-4df7-a0d8-c157b33b0138,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487\"" Mar 17 18:32:02.102724 systemd-networkd[1094]: caliceb5ac3e1d8: Gained IPv6LL Mar 17 18:32:02.473233 kubelet[2201]: E0317 18:32:02.473106 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:02.473873 kubelet[2201]: E0317 18:32:02.473832 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:02.486559 systemd-networkd[1094]: calibb2dee2514a: Gained IPv6LL Mar 17 18:32:02.550536 systemd-networkd[1094]: cali360d2a0b563: Gained IPv6LL Mar 17 18:32:02.550775 systemd-networkd[1094]: calib2ee0f7b036: Gained IPv6LL Mar 17 18:32:02.998522 systemd-networkd[1094]: cali665280dfb6c: Gained IPv6LL Mar 17 18:32:03.433027 env[1316]: time="2025-03-17T18:32:03.432974957Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:03.434512 env[1316]: 
time="2025-03-17T18:32:03.434470492Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:03.435850 env[1316]: time="2025-03-17T18:32:03.435815667Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:03.437366 env[1316]: time="2025-03-17T18:32:03.437334563Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:03.437794 env[1316]: time="2025-03-17T18:32:03.437754287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Mar 17 18:32:03.441125 env[1316]: time="2025-03-17T18:32:03.440621478Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:32:03.451497 env[1316]: time="2025-03-17T18:32:03.449674014Z" level=info msg="CreateContainer within sandbox \"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Mar 17 18:32:03.480139 systemd[1]: Started sshd@13-10.0.0.124:22-10.0.0.1:35610.service. Mar 17 18:32:03.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.124:22-10.0.0.1:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:03.481119 kernel: kauditd_printk_skb: 31 callbacks suppressed Mar 17 18:32:03.481194 kernel: audit: type=1130 audit(1742236323.478:458): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.124:22-10.0.0.1:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:03.504946 kubelet[2201]: E0317 18:32:03.504331 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:03.532243 env[1316]: time="2025-03-17T18:32:03.532197493Z" level=info msg="CreateContainer within sandbox \"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"910fe46f65a335b82d6a3a8a1cc8bccb3d582b7bac1c07cfabc18c2168da1f8f\"" Mar 17 18:32:03.534143 env[1316]: time="2025-03-17T18:32:03.533337745Z" level=info msg="StartContainer for \"910fe46f65a335b82d6a3a8a1cc8bccb3d582b7bac1c07cfabc18c2168da1f8f\"" Mar 17 18:32:03.543000 audit[4543]: USER_ACCT pid=4543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.544000 audit[4543]: CRED_ACQ pid=4543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.546427 sshd[4543]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:03.549882 sshd[4543]: Accepted publickey for core from 10.0.0.1 port 35610 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:03.551729 kernel: audit: 
type=1101 audit(1742236323.543:459): pid=4543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.551788 kernel: audit: type=1103 audit(1742236323.544:460): pid=4543 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.551815 kernel: audit: type=1006 audit(1742236323.544:461): pid=4543 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Mar 17 18:32:03.544000 audit[4543]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeec87640 a2=3 a3=1 items=0 ppid=1 pid=4543 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:03.555633 systemd[1]: Started session-14.scope. Mar 17 18:32:03.556576 systemd-logind[1300]: New session 14 of user core. 
Mar 17 18:32:03.556840 kernel: audit: type=1300 audit(1742236323.544:461): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeec87640 a2=3 a3=1 items=0 ppid=1 pid=4543 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:03.544000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:03.558605 kernel: audit: type=1327 audit(1742236323.544:461): proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:03.559000 audit[4543]: USER_START pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.563000 audit[4561]: CRED_ACQ pid=4561 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.567996 kernel: audit: type=1105 audit(1742236323.559:462): pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.568062 kernel: audit: type=1103 audit(1742236323.563:463): pid=4561 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.623799 env[1316]: time="2025-03-17T18:32:03.623755067Z" level=info msg="StartContainer for \"910fe46f65a335b82d6a3a8a1cc8bccb3d582b7bac1c07cfabc18c2168da1f8f\" returns successfully" Mar 17 
18:32:03.733501 sshd[4543]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:03.733000 audit[4543]: USER_END pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.736690 systemd-logind[1300]: Session 14 logged out. Waiting for processes to exit. Mar 17 18:32:03.733000 audit[4543]: CRED_DISP pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.740245 systemd[1]: sshd@13-10.0.0.124:22-10.0.0.1:35610.service: Deactivated successfully. Mar 17 18:32:03.741067 systemd[1]: session-14.scope: Deactivated successfully. Mar 17 18:32:03.742124 systemd-logind[1300]: Removed session 14. Mar 17 18:32:03.743184 kernel: audit: type=1106 audit(1742236323.733:464): pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.743281 kernel: audit: type=1104 audit(1742236323.733:465): pid=4543 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:03.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.124:22-10.0.0.1:35610 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:04.551260 kubelet[2201]: I0317 18:32:04.550939 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-566884d556-6vd4w" podStartSLOduration=25.970723918 podStartE2EDuration="28.550921504s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:32:00.85833063 +0000 UTC m=+46.633686860" lastFinishedPulling="2025-03-17 18:32:03.438528216 +0000 UTC m=+49.213884446" observedRunningTime="2025-03-17 18:32:04.488399092 +0000 UTC m=+50.263755322" watchObservedRunningTime="2025-03-17 18:32:04.550921504 +0000 UTC m=+50.326277694" Mar 17 18:32:04.663459 kubelet[2201]: I0317 18:32:04.663373 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:32:04.664240 kubelet[2201]: E0317 18:32:04.664199 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:04.681357 systemd[1]: run-containerd-runc-k8s.io-b7e024883cfdbabed82d39239a3e8b8cd5edb77f6d7a4b7a4622a2af43781e3c-runc.uR9FBi.mount: Deactivated successfully. 
Mar 17 18:32:05.480550 kubelet[2201]: E0317 18:32:05.480186 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:05.908390 env[1316]: time="2025-03-17T18:32:05.908329585Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:05.910190 env[1316]: time="2025-03-17T18:32:05.910162124Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:05.911484 env[1316]: time="2025-03-17T18:32:05.911448177Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:05.912764 env[1316]: time="2025-03-17T18:32:05.912711830Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:05.913328 env[1316]: time="2025-03-17T18:32:05.913287836Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Mar 17 18:32:05.914552 env[1316]: time="2025-03-17T18:32:05.914525328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Mar 17 18:32:05.917376 env[1316]: time="2025-03-17T18:32:05.917335197Z" level=info msg="CreateContainer within sandbox \"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 
18:32:05.927691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2290580321.mount: Deactivated successfully. Mar 17 18:32:05.929479 env[1316]: time="2025-03-17T18:32:05.929438201Z" level=info msg="CreateContainer within sandbox \"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d33d79f4640b5eaa30fe940b6ae3c801a6673f74f3daa2a67548b27fb3628f92\"" Mar 17 18:32:05.929964 env[1316]: time="2025-03-17T18:32:05.929938206Z" level=info msg="StartContainer for \"d33d79f4640b5eaa30fe940b6ae3c801a6673f74f3daa2a67548b27fb3628f92\"" Mar 17 18:32:06.026329 env[1316]: time="2025-03-17T18:32:06.026282190Z" level=info msg="StartContainer for \"d33d79f4640b5eaa30fe940b6ae3c801a6673f74f3daa2a67548b27fb3628f92\" returns successfully" Mar 17 18:32:06.526000 audit[4701]: NETFILTER_CFG table=filter:111 family=2 entries=10 op=nft_register_rule pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:06.526000 audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffcdb14e20 a2=0 a3=1 items=0 ppid=2403 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:06.526000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:06.531000 audit[4701]: NETFILTER_CFG table=nat:112 family=2 entries=20 op=nft_register_rule pid=4701 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:06.531000 audit[4701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffcdb14e20 a2=0 a3=1 items=0 ppid=2403 pid=4701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:06.531000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:07.489554 kubelet[2201]: I0317 18:32:07.489518 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:32:07.707238 env[1316]: time="2025-03-17T18:32:07.707177763Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.775596 env[1316]: time="2025-03-17T18:32:07.775479120Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.803879 env[1316]: time="2025-03-17T18:32:07.803844761Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.808153 env[1316]: time="2025-03-17T18:32:07.808109363Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:07.808626 env[1316]: time="2025-03-17T18:32:07.808591728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Mar 17 18:32:07.810503 env[1316]: time="2025-03-17T18:32:07.809771780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Mar 17 18:32:07.811567 env[1316]: time="2025-03-17T18:32:07.811537037Z" level=info msg="CreateContainer within sandbox 
\"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 18:32:07.822889 env[1316]: time="2025-03-17T18:32:07.822844269Z" level=info msg="CreateContainer within sandbox \"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5cbc8cdb4b08837e9b0fc0f52240a6ea724006186a1bfa2ffda921ed121ee9e9\"" Mar 17 18:32:07.824073 env[1316]: time="2025-03-17T18:32:07.823475675Z" level=info msg="StartContainer for \"5cbc8cdb4b08837e9b0fc0f52240a6ea724006186a1bfa2ffda921ed121ee9e9\"" Mar 17 18:32:07.842971 systemd[1]: run-containerd-runc-k8s.io-5cbc8cdb4b08837e9b0fc0f52240a6ea724006186a1bfa2ffda921ed121ee9e9-runc.4lBYCF.mount: Deactivated successfully. Mar 17 18:32:07.904807 env[1316]: time="2025-03-17T18:32:07.904758040Z" level=info msg="StartContainer for \"5cbc8cdb4b08837e9b0fc0f52240a6ea724006186a1bfa2ffda921ed121ee9e9\" returns successfully" Mar 17 18:32:08.139482 env[1316]: time="2025-03-17T18:32:08.139441264Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:08.140928 env[1316]: time="2025-03-17T18:32:08.140896878Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:08.142081 env[1316]: time="2025-03-17T18:32:08.142057329Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:08.143222 env[1316]: time="2025-03-17T18:32:08.143197860Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Mar 17 18:32:08.143687 env[1316]: time="2025-03-17T18:32:08.143660065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Mar 17 18:32:08.146040 env[1316]: time="2025-03-17T18:32:08.146008848Z" level=info msg="CreateContainer within sandbox \"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 17 18:32:08.158204 env[1316]: time="2025-03-17T18:32:08.158165326Z" level=info msg="CreateContainer within sandbox \"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b6c2c1467a652f02f4170beba6edc6a6b12c3c97363301c86c176bfb38b488ce\"" Mar 17 18:32:08.158606 env[1316]: time="2025-03-17T18:32:08.158579810Z" level=info msg="StartContainer for \"b6c2c1467a652f02f4170beba6edc6a6b12c3c97363301c86c176bfb38b488ce\"" Mar 17 18:32:08.217558 env[1316]: time="2025-03-17T18:32:08.217501305Z" level=info msg="StartContainer for \"b6c2c1467a652f02f4170beba6edc6a6b12c3c97363301c86c176bfb38b488ce\" returns successfully" Mar 17 18:32:08.391674 kubelet[2201]: I0317 18:32:08.391328 2201 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 18:32:08.393070 kubelet[2201]: I0317 18:32:08.393047 2201 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 18:32:08.511529 kubelet[2201]: I0317 18:32:08.505766 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-apiserver/calico-apiserver-f7b5bdcb8-nw9hv" podStartSLOduration=27.470757656 podStartE2EDuration="32.505750395s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:32:00.879119265 +0000 UTC m=+46.654475495" lastFinishedPulling="2025-03-17 18:32:05.914112044 +0000 UTC m=+51.689468234" observedRunningTime="2025-03-17 18:32:06.514333226 +0000 UTC m=+52.289689456" watchObservedRunningTime="2025-03-17 18:32:08.505750395 +0000 UTC m=+54.281106625" Mar 17 18:32:08.511529 kubelet[2201]: I0317 18:32:08.506010 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f7b5bdcb8-jwwvk" podStartSLOduration=25.979179212 podStartE2EDuration="32.506003717s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:32:01.618019931 +0000 UTC m=+47.393376161" lastFinishedPulling="2025-03-17 18:32:08.144844436 +0000 UTC m=+53.920200666" observedRunningTime="2025-03-17 18:32:08.505167229 +0000 UTC m=+54.280523459" watchObservedRunningTime="2025-03-17 18:32:08.506003717 +0000 UTC m=+54.281359947" Mar 17 18:32:08.517160 kubelet[2201]: I0317 18:32:08.515563 2201 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4f7cd" podStartSLOduration=24.41192652 podStartE2EDuration="32.51553873s" podCreationTimestamp="2025-03-17 18:31:36 +0000 UTC" firstStartedPulling="2025-03-17 18:31:59.705980688 +0000 UTC m=+45.481336918" lastFinishedPulling="2025-03-17 18:32:07.809592898 +0000 UTC m=+53.584949128" observedRunningTime="2025-03-17 18:32:08.515263688 +0000 UTC m=+54.290619918" watchObservedRunningTime="2025-03-17 18:32:08.51553873 +0000 UTC m=+54.290894920" Mar 17 18:32:08.523000 audit[4787]: NETFILTER_CFG table=filter:113 family=2 entries=10 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:08.528317 kernel: kauditd_printk_skb: 7 callbacks suppressed Mar 17 
18:32:08.528402 kernel: audit: type=1325 audit(1742236328.523:469): table=filter:113 family=2 entries=10 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:08.528460 kernel: audit: type=1300 audit(1742236328.523:469): arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd71743b0 a2=0 a3=1 items=0 ppid=2403 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:08.523000 audit[4787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd71743b0 a2=0 a3=1 items=0 ppid=2403 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:08.523000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:08.534623 kernel: audit: type=1327 audit(1742236328.523:469): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:08.538000 audit[4787]: NETFILTER_CFG table=nat:114 family=2 entries=20 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:08.538000 audit[4787]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd71743b0 a2=0 a3=1 items=0 ppid=2403 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:08.546120 kernel: audit: type=1325 audit(1742236328.538:470): table=nat:114 family=2 entries=20 op=nft_register_rule pid=4787 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 
18:32:08.546185 kernel: audit: type=1300 audit(1742236328.538:470): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd71743b0 a2=0 a3=1 items=0 ppid=2403 pid=4787 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:08.546209 kernel: audit: type=1327 audit(1742236328.538:470): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:08.538000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:08.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.124:22-10.0.0.1:35626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:08.736587 systemd[1]: Started sshd@14-10.0.0.124:22-10.0.0.1:35626.service. Mar 17 18:32:08.742798 kernel: audit: type=1130 audit(1742236328.735:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.124:22-10.0.0.1:35626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:08.774000 audit[4788]: USER_ACCT pid=4788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.776175 sshd[4788]: Accepted publickey for core from 10.0.0.1 port 35626 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:08.779436 kernel: audit: type=1101 audit(1742236328.774:472): pid=4788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.779000 audit[4788]: CRED_ACQ pid=4788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.781941 sshd[4788]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:08.786355 kernel: audit: type=1103 audit(1742236328.779:473): pid=4788 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.786470 kernel: audit: type=1006 audit(1742236328.779:474): pid=4788 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Mar 17 18:32:08.779000 audit[4788]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffeea96210 a2=3 a3=1 items=0 ppid=1 pid=4788 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:08.779000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:08.789288 systemd-logind[1300]: New session 15 of user core. Mar 17 18:32:08.790012 systemd[1]: Started session-15.scope. Mar 17 18:32:08.795000 audit[4788]: USER_START pid=4788 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.797000 audit[4791]: CRED_ACQ pid=4791 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.969777 sshd[4788]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:08.969000 audit[4788]: USER_END pid=4788 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.969000 audit[4788]: CRED_DISP pid=4788 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:08.973064 systemd[1]: sshd@14-10.0.0.124:22-10.0.0.1:35626.service: Deactivated successfully. Mar 17 18:32:08.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.124:22-10.0.0.1:35626 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:08.974077 systemd-logind[1300]: Session 15 logged out. Waiting for processes to exit. 
Mar 17 18:32:08.974130 systemd[1]: session-15.scope: Deactivated successfully. Mar 17 18:32:08.976475 systemd-logind[1300]: Removed session 15. Mar 17 18:32:09.498932 kubelet[2201]: I0317 18:32:09.498883 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:32:13.973295 systemd[1]: Started sshd@15-10.0.0.124:22-10.0.0.1:38202.service. Mar 17 18:32:13.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.124:22-10.0.0.1:38202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:13.977682 kernel: kauditd_printk_skb: 7 callbacks suppressed Mar 17 18:32:13.977778 kernel: audit: type=1130 audit(1742236333.972:480): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.124:22-10.0.0.1:38202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:14.009000 audit[4804]: USER_ACCT pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.010987 sshd[4804]: Accepted publickey for core from 10.0.0.1 port 38202 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:14.012866 sshd[4804]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:14.010000 audit[4804]: CRED_ACQ pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.019675 kernel: audit: type=1101 audit(1742236334.009:481): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting 
grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.019740 kernel: audit: type=1103 audit(1742236334.010:482): pid=4804 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.022071 kernel: audit: type=1006 audit(1742236334.011:483): pid=4804 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Mar 17 18:32:14.011000 audit[4804]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff802630 a2=3 a3=1 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:14.027562 kernel: audit: type=1300 audit(1742236334.011:483): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffff802630 a2=3 a3=1 items=0 ppid=1 pid=4804 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:14.011000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:14.029771 kernel: audit: type=1327 audit(1742236334.011:483): proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:14.030348 systemd-logind[1300]: New session 16 of user core. Mar 17 18:32:14.032284 systemd[1]: Started session-16.scope. 
Mar 17 18:32:14.035000 audit[4804]: USER_START pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.039000 audit[4807]: CRED_ACQ pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.047054 kernel: audit: type=1105 audit(1742236334.035:484): pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.047116 kernel: audit: type=1103 audit(1742236334.039:485): pid=4807 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.207229 sshd[4804]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:14.209663 systemd[1]: Started sshd@16-10.0.0.124:22-10.0.0.1:38210.service. Mar 17 18:32:14.207000 audit[4804]: USER_END pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.214182 systemd[1]: sshd@15-10.0.0.124:22-10.0.0.1:38202.service: Deactivated successfully. 
Mar 17 18:32:14.207000 audit[4804]: CRED_DISP pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.216499 systemd[1]: session-16.scope: Deactivated successfully. Mar 17 18:32:14.216921 systemd-logind[1300]: Session 16 logged out. Waiting for processes to exit. Mar 17 18:32:14.217372 kernel: audit: type=1106 audit(1742236334.207:486): pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.217459 kernel: audit: type=1104 audit(1742236334.207:487): pid=4804 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.124:22-10.0.0.1:38210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:14.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.124:22-10.0.0.1:38202 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:14.218181 systemd-logind[1300]: Removed session 16. 
Mar 17 18:32:14.241000 audit[4817]: USER_ACCT pid=4817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.243616 sshd[4817]: Accepted publickey for core from 10.0.0.1 port 38210 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:14.243000 audit[4817]: CRED_ACQ pid=4817 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.243000 audit[4817]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc7f5d400 a2=3 a3=1 items=0 ppid=1 pid=4817 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:14.243000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:14.245180 sshd[4817]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:14.249470 systemd-logind[1300]: New session 17 of user core. Mar 17 18:32:14.249552 systemd[1]: Started session-17.scope. 
Mar 17 18:32:14.251000 audit[4817]: USER_START pid=4817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.252000 audit[4822]: CRED_ACQ pid=4822 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.276606 env[1316]: time="2025-03-17T18:32:14.276542581Z" level=info msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.325 [WARNING][4838] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4b94281-bded-4df7-a0d8-c157b33b0138", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487", Pod:"calico-apiserver-f7b5bdcb8-jwwvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali665280dfb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.326 [INFO][4838] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.326 [INFO][4838] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" iface="eth0" netns="" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.326 [INFO][4838] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.326 [INFO][4838] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.373 [INFO][4852] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.373 [INFO][4852] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.373 [INFO][4852] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.385 [WARNING][4852] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.385 [INFO][4852] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.386 [INFO][4852] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.389469 env[1316]: 2025-03-17 18:32:14.388 [INFO][4838] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.390085 env[1316]: time="2025-03-17T18:32:14.389507797Z" level=info msg="TearDown network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" successfully" Mar 17 18:32:14.390085 env[1316]: time="2025-03-17T18:32:14.389539918Z" level=info msg="StopPodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" returns successfully" Mar 17 18:32:14.390395 env[1316]: time="2025-03-17T18:32:14.390363765Z" level=info msg="RemovePodSandbox for \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" Mar 17 18:32:14.390570 env[1316]: time="2025-03-17T18:32:14.390526527Z" level=info msg="Forcibly stopping sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\"" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.427 [WARNING][4876] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"a4b94281-bded-4df7-a0d8-c157b33b0138", ResourceVersion:"1005", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e62294bc3d18e7d4a9f427b0830f48608bd32b114b492c89dc454998c259d487", Pod:"calico-apiserver-f7b5bdcb8-jwwvk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali665280dfb6c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.428 [INFO][4876] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.428 [INFO][4876] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" iface="eth0" netns="" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.428 [INFO][4876] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.428 [INFO][4876] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.460 [INFO][4884] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.460 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.460 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.470 [WARNING][4884] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.470 [INFO][4884] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" HandleID="k8s-pod-network.cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--jwwvk-eth0" Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.472 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.487368 env[1316]: 2025-03-17 18:32:14.476 [INFO][4876] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3" Mar 17 18:32:14.487882 env[1316]: time="2025-03-17T18:32:14.487422798Z" level=info msg="TearDown network for sandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" successfully" Mar 17 18:32:14.495357 env[1316]: time="2025-03-17T18:32:14.495255829Z" level=info msg="RemovePodSandbox \"cbbcd6f84ca0873ff7241e3c0ea3ce66298e745f62363f3fa19bca61307f4ba3\" returns successfully" Mar 17 18:32:14.496489 env[1316]: time="2025-03-17T18:32:14.496460959Z" level=info msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" Mar 17 18:32:14.541505 sshd[4817]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:14.540000 audit[4817]: USER_END pid=4817 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.541000 audit[4817]: CRED_DISP pid=4817 uid=0 
auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.124:22-10.0.0.1:38212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:14.544398 systemd[1]: Started sshd@17-10.0.0.124:22-10.0.0.1:38212.service. Mar 17 18:32:14.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.124:22-10.0.0.1:38210 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:14.548282 systemd[1]: sshd@16-10.0.0.124:22-10.0.0.1:38210.service: Deactivated successfully. Mar 17 18:32:14.549173 systemd[1]: session-17.scope: Deactivated successfully. Mar 17 18:32:14.551150 systemd-logind[1300]: Session 17 logged out. Waiting for processes to exit. Mar 17 18:32:14.552330 systemd-logind[1300]: Removed session 17. 
Mar 17 18:32:14.588000 audit[4916]: USER_ACCT pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.590603 sshd[4916]: Accepted publickey for core from 10.0.0.1 port 38212 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:14.590000 audit[4916]: CRED_ACQ pid=4916 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.590000 audit[4916]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff4f528b0 a2=3 a3=1 items=0 ppid=1 pid=4916 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:14.590000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:14.592208 sshd[4916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:14.596343 systemd-logind[1300]: New session 18 of user core. Mar 17 18:32:14.597086 systemd[1]: Started session-18.scope. 
Mar 17 18:32:14.601000 audit[4916]: USER_START pid=4916 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.602000 audit[4929]: CRED_ACQ pid=4929 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.546 [WARNING][4908] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5", Pod:"coredns-7db6d8ff4d-rgm7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb5ac3e1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.546 [INFO][4908] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.546 [INFO][4908] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" iface="eth0" netns="" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.546 [INFO][4908] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.546 [INFO][4908] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.583 [INFO][4917] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.583 [INFO][4917] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.583 [INFO][4917] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.594 [WARNING][4917] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.595 [INFO][4917] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.597 [INFO][4917] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:32:14.604383 env[1316]: 2025-03-17 18:32:14.602 [INFO][4908] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.604806 env[1316]: time="2025-03-17T18:32:14.604439651Z" level=info msg="TearDown network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" successfully" Mar 17 18:32:14.604806 env[1316]: time="2025-03-17T18:32:14.604472411Z" level=info msg="StopPodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" returns successfully" Mar 17 18:32:14.604895 env[1316]: time="2025-03-17T18:32:14.604862535Z" level=info msg="RemovePodSandbox for \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" Mar 17 18:32:14.604934 env[1316]: time="2025-03-17T18:32:14.604901455Z" level=info msg="Forcibly stopping sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\"" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.638 [WARNING][4946] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4d71cd08-81fa-44dc-85ff-58ab8ef8fce9", ResourceVersion:"934", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57c29ed428601b1fa3b559ac514482ab7a4c1de8adfe2140768edf163a441ca5", Pod:"coredns-7db6d8ff4d-rgm7b", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliceb5ac3e1d8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.639 [INFO][4946] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.639 [INFO][4946] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" iface="eth0" netns="" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.639 [INFO][4946] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.639 [INFO][4946] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.656 [INFO][4954] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.656 [INFO][4954] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.656 [INFO][4954] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.665 [WARNING][4954] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.665 [INFO][4954] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" HandleID="k8s-pod-network.cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Workload="localhost-k8s-coredns--7db6d8ff4d--rgm7b-eth0" Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.667 [INFO][4954] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.672542 env[1316]: 2025-03-17 18:32:14.671 [INFO][4946] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86" Mar 17 18:32:14.672980 env[1316]: time="2025-03-17T18:32:14.672572864Z" level=info msg="TearDown network for sandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" successfully" Mar 17 18:32:14.675763 env[1316]: time="2025-03-17T18:32:14.675732652Z" level=info msg="RemovePodSandbox \"cbfad90cc5a7c562f4c9971eba29ec2e62505e04da0628759f8566144580cf86\" returns successfully" Mar 17 18:32:14.676254 env[1316]: time="2025-03-17T18:32:14.676226896Z" level=info msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.713 [WARNING][4982] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5da61114-c0cd-4682-83c2-1d119dc4cf0e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae", Pod:"coredns-7db6d8ff4d-blwg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360d2a0b563", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.713 [INFO][4982] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.713 [INFO][4982] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" iface="eth0" netns="" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.713 [INFO][4982] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.713 [INFO][4982] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.741 [INFO][4991] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.741 [INFO][4991] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.741 [INFO][4991] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.750 [WARNING][4991] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.750 [INFO][4991] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.751 [INFO][4991] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.753955 env[1316]: 2025-03-17 18:32:14.752 [INFO][4982] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.754387 env[1316]: time="2025-03-17T18:32:14.753939516Z" level=info msg="TearDown network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" successfully" Mar 17 18:32:14.754387 env[1316]: time="2025-03-17T18:32:14.753970436Z" level=info msg="StopPodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" returns successfully" Mar 17 18:32:14.754798 env[1316]: time="2025-03-17T18:32:14.754769523Z" level=info msg="RemovePodSandbox for \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" Mar 17 18:32:14.754939 env[1316]: time="2025-03-17T18:32:14.754901324Z" level=info msg="Forcibly stopping sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\"" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.809 [WARNING][5014] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"5da61114-c0cd-4682-83c2-1d119dc4cf0e", ResourceVersion:"938", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d0aaaba6d43b1f5c5baf09f194124b4c8bc55aee4258e41cd4ccf136d3e27aae", Pod:"coredns-7db6d8ff4d-blwg5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali360d2a0b563", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.809 [INFO][5014] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.809 [INFO][5014] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" iface="eth0" netns="" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.809 [INFO][5014] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.809 [INFO][5014] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.836 [INFO][5021] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.837 [INFO][5021] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.837 [INFO][5021] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.845 [WARNING][5021] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.845 [INFO][5021] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" HandleID="k8s-pod-network.bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Workload="localhost-k8s-coredns--7db6d8ff4d--blwg5-eth0" Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.847 [INFO][5021] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.849713 env[1316]: 2025-03-17 18:32:14.848 [INFO][5014] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd" Mar 17 18:32:14.850133 env[1316]: time="2025-03-17T18:32:14.849727457Z" level=info msg="TearDown network for sandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" successfully" Mar 17 18:32:14.854905 env[1316]: time="2025-03-17T18:32:14.854867543Z" level=info msg="RemovePodSandbox \"bc427cd03da78b132c7b34e12a0903c4dbb28f5f0d6fc87f5ba363f2a3a51afd\" returns successfully" Mar 17 18:32:14.855372 env[1316]: time="2025-03-17T18:32:14.855342148Z" level=info msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.898 [WARNING][5043] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30b62e3f-2593-4372-ab8d-126ab81bae75", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77", Pod:"calico-apiserver-f7b5bdcb8-nw9hv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ee0f7b036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.898 [INFO][5043] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.898 [INFO][5043] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" iface="eth0" netns="" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.898 [INFO][5043] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.898 [INFO][5043] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.919 [INFO][5050] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.919 [INFO][5050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.919 [INFO][5050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.932 [WARNING][5050] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.932 [INFO][5050] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.933 [INFO][5050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:14.939517 env[1316]: 2025-03-17 18:32:14.935 [INFO][5043] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:14.939991 env[1316]: time="2025-03-17T18:32:14.939546545Z" level=info msg="TearDown network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" successfully" Mar 17 18:32:14.939991 env[1316]: time="2025-03-17T18:32:14.939575065Z" level=info msg="StopPodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" returns successfully" Mar 17 18:32:14.940589 env[1316]: time="2025-03-17T18:32:14.940550234Z" level=info msg="RemovePodSandbox for \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" Mar 17 18:32:14.940738 env[1316]: time="2025-03-17T18:32:14.940701195Z" level=info msg="Forcibly stopping sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\"" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:14.996 [WARNING][5075] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0", GenerateName:"calico-apiserver-f7b5bdcb8-", Namespace:"calico-apiserver", SelfLink:"", UID:"30b62e3f-2593-4372-ab8d-126ab81bae75", ResourceVersion:"991", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f7b5bdcb8", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"539fc85ea6b6aceb3836b7e55432e66488382c791e6608270dd1510680ac3f77", Pod:"calico-apiserver-f7b5bdcb8-nw9hv", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib2ee0f7b036", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:14.996 [INFO][5075] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:14.996 [INFO][5075] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" iface="eth0" netns="" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:14.996 [INFO][5075] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:14.996 [INFO][5075] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.017 [INFO][5082] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.017 [INFO][5082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.017 [INFO][5082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.028 [WARNING][5082] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.028 [INFO][5082] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" HandleID="k8s-pod-network.23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Workload="localhost-k8s-calico--apiserver--f7b5bdcb8--nw9hv-eth0" Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.033 [INFO][5082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:15.036124 env[1316]: 2025-03-17 18:32:15.034 [INFO][5075] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b" Mar 17 18:32:15.036124 env[1316]: time="2025-03-17T18:32:15.036092450Z" level=info msg="TearDown network for sandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" successfully" Mar 17 18:32:15.039217 env[1316]: time="2025-03-17T18:32:15.039183678Z" level=info msg="RemovePodSandbox \"23e31185d5f191f2c39044a782e521409f571d01247ba56eaa47c302c1b10f6b\" returns successfully" Mar 17 18:32:15.039681 env[1316]: time="2025-03-17T18:32:15.039652042Z" level=info msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.103 [WARNING][5104] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4f7cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ba96b0-551d-424c-b677-f69ea1cdb260", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e", Pod:"csi-node-driver-4f7cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19cc077760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.104 [INFO][5104] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.104 [INFO][5104] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" iface="eth0" netns="" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.104 [INFO][5104] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.104 [INFO][5104] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.129 [INFO][5113] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.129 [INFO][5113] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.129 [INFO][5113] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.140 [WARNING][5113] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.140 [INFO][5113] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.142 [INFO][5113] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:32:15.170008 env[1316]: 2025-03-17 18:32:15.144 [INFO][5104] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.170462 env[1316]: time="2025-03-17T18:32:15.170035082Z" level=info msg="TearDown network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" successfully" Mar 17 18:32:15.170462 env[1316]: time="2025-03-17T18:32:15.170064762Z" level=info msg="StopPodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" returns successfully" Mar 17 18:32:15.174324 env[1316]: time="2025-03-17T18:32:15.174258399Z" level=info msg="RemovePodSandbox for \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" Mar 17 18:32:15.174475 env[1316]: time="2025-03-17T18:32:15.174299119Z" level=info msg="Forcibly stopping sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\"" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.248 [WARNING][5137] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--4f7cd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"69ba96b0-551d-424c-b677-f69ea1cdb260", ResourceVersion:"1008", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"afeb1b85c729a3c80d7a6f082045eb223acb20272a2f4fe4e83e1b62413b808e", Pod:"csi-node-driver-4f7cd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia19cc077760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.248 [INFO][5137] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.248 [INFO][5137] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" iface="eth0" netns="" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.248 [INFO][5137] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.248 [INFO][5137] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.313 [INFO][5144] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.313 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.313 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.326 [WARNING][5144] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.326 [INFO][5144] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" HandleID="k8s-pod-network.27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Workload="localhost-k8s-csi--node--driver--4f7cd-eth0" Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.328 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 18:32:15.331362 env[1316]: 2025-03-17 18:32:15.330 [INFO][5137] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88" Mar 17 18:32:15.331362 env[1316]: time="2025-03-17T18:32:15.331333756Z" level=info msg="TearDown network for sandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" successfully" Mar 17 18:32:15.334195 env[1316]: time="2025-03-17T18:32:15.334158581Z" level=info msg="RemovePodSandbox \"27163a1e27c2af007eb2d0dd59e4c3c5e164dc94f965b587edcc62d0cf87cd88\" returns successfully" Mar 17 18:32:15.334639 env[1316]: time="2025-03-17T18:32:15.334606905Z" level=info msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.371 [WARNING][5167] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0", GenerateName:"calico-kube-controllers-566884d556-", Namespace:"calico-system", SelfLink:"", UID:"dae16c91-a8cb-494a-86d1-5ba60d550a02", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566884d556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302", Pod:"calico-kube-controllers-566884d556-6vd4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb2dee2514a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.372 [INFO][5167] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.372 [INFO][5167] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" iface="eth0" netns="" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.372 [INFO][5167] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.372 [INFO][5167] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.393 [INFO][5174] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.393 [INFO][5174] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.393 [INFO][5174] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.401 [WARNING][5174] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.401 [INFO][5174] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.402 [INFO][5174] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:15.405553 env[1316]: 2025-03-17 18:32:15.404 [INFO][5167] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.405977 env[1316]: time="2025-03-17T18:32:15.405588097Z" level=info msg="TearDown network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" successfully" Mar 17 18:32:15.405977 env[1316]: time="2025-03-17T18:32:15.405618697Z" level=info msg="StopPodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" returns successfully" Mar 17 18:32:15.406165 env[1316]: time="2025-03-17T18:32:15.406129062Z" level=info msg="RemovePodSandbox for \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" Mar 17 18:32:15.406207 env[1316]: time="2025-03-17T18:32:15.406175062Z" level=info msg="Forcibly stopping sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\"" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.454 [WARNING][5196] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0", GenerateName:"calico-kube-controllers-566884d556-", Namespace:"calico-system", SelfLink:"", UID:"dae16c91-a8cb-494a-86d1-5ba60d550a02", ResourceVersion:"965", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 18, 31, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"566884d556", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"378d480e154722575b8700b61399d9bec43de2ff48193d600ec4cf714161b302", Pod:"calico-kube-controllers-566884d556-6vd4w", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calibb2dee2514a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.454 [INFO][5196] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.454 [INFO][5196] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" iface="eth0" netns="" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.455 [INFO][5196] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.455 [INFO][5196] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.485 [INFO][5203] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.485 [INFO][5203] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.485 [INFO][5203] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.493 [WARNING][5203] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.493 [INFO][5203] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" HandleID="k8s-pod-network.4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Workload="localhost-k8s-calico--kube--controllers--566884d556--6vd4w-eth0" Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.495 [INFO][5203] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 18:32:15.497773 env[1316]: 2025-03-17 18:32:15.496 [INFO][5196] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7" Mar 17 18:32:15.498208 env[1316]: time="2025-03-17T18:32:15.497791557Z" level=info msg="TearDown network for sandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" successfully" Mar 17 18:32:15.500396 env[1316]: time="2025-03-17T18:32:15.500365780Z" level=info msg="RemovePodSandbox \"4116c8dac2025944b54713d7418f8c7969af0b0ec4948daea234a7f6351831b7\" returns successfully" Mar 17 18:32:16.149000 audit[5212]: NETFILTER_CFG table=filter:115 family=2 entries=22 op=nft_register_rule pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:16.149000 audit[5212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffdbdfddc0 a2=0 a3=1 items=0 ppid=2403 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.149000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:16.157474 sshd[4916]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:16.157000 audit[5212]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=5212 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:16.157000 audit[5212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffdbdfddc0 a2=0 a3=1 items=0 ppid=2403 pid=5212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:16.157000 audit[4916]: USER_END pid=4916 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.158000 audit[4916]: CRED_DISP pid=4916 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.124:22-10.0.0.1:38222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:16.160312 systemd[1]: Started sshd@18-10.0.0.124:22-10.0.0.1:38222.service. Mar 17 18:32:16.160844 systemd[1]: sshd@17-10.0.0.124:22-10.0.0.1:38212.service: Deactivated successfully. 
Mar 17 18:32:16.159000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.124:22-10.0.0.1:38212 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:16.162356 systemd-logind[1300]: Session 18 logged out. Waiting for processes to exit. Mar 17 18:32:16.162426 systemd[1]: session-18.scope: Deactivated successfully. Mar 17 18:32:16.163866 systemd-logind[1300]: Removed session 18. Mar 17 18:32:16.197000 audit[5235]: NETFILTER_CFG table=filter:117 family=2 entries=34 op=nft_register_rule pid=5235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:16.197000 audit[5235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=12604 a0=3 a1=ffffddce42d0 a2=0 a3=1 items=0 ppid=2403 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.197000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:16.203000 audit[5235]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=5235 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:16.203000 audit[5235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffddce42d0 a2=0 a3=1 items=0 ppid=2403 pid=5235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.203000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:16.210000 audit[5214]: USER_ACCT pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.212279 sshd[5214]: Accepted publickey for core from 10.0.0.1 port 38222 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:16.211000 audit[5214]: CRED_ACQ pid=5214 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.212000 audit[5214]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd75e8aa0 a2=3 a3=1 items=0 ppid=1 pid=5214 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.212000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:16.213919 sshd[5214]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:16.217726 systemd-logind[1300]: New session 19 of user core. Mar 17 18:32:16.218098 systemd[1]: Started session-19.scope. 
Mar 17 18:32:16.220000 audit[5214]: USER_START pid=5214 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.222000 audit[5241]: CRED_ACQ pid=5241 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.124:22-10.0.0.1:38238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:16.468923 systemd[1]: Started sshd@19-10.0.0.124:22-10.0.0.1:38238.service. Mar 17 18:32:16.473587 sshd[5214]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:16.474000 audit[5214]: USER_END pid=5214 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.474000 audit[5214]: CRED_DISP pid=5214 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.477515 systemd[1]: sshd@18-10.0.0.124:22-10.0.0.1:38222.service: Deactivated successfully. Mar 17 18:32:16.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.124:22-10.0.0.1:38222 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:16.478960 systemd[1]: session-19.scope: Deactivated successfully. Mar 17 18:32:16.479638 systemd-logind[1300]: Session 19 logged out. Waiting for processes to exit. Mar 17 18:32:16.480785 systemd-logind[1300]: Removed session 19. Mar 17 18:32:16.503000 audit[5248]: USER_ACCT pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.505381 sshd[5248]: Accepted publickey for core from 10.0.0.1 port 38238 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:16.504000 audit[5248]: CRED_ACQ pid=5248 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.504000 audit[5248]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcd3e6340 a2=3 a3=1 items=0 ppid=1 pid=5248 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:16.504000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:16.506609 sshd[5248]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:16.510722 systemd[1]: Started session-20.scope. Mar 17 18:32:16.511051 systemd-logind[1300]: New session 20 of user core. 
Mar 17 18:32:16.518000 audit[5248]: USER_START pid=5248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.524000 audit[5253]: CRED_ACQ pid=5253 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.642674 sshd[5248]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:16.642000 audit[5248]: USER_END pid=5248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.642000 audit[5248]: CRED_DISP pid=5248 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:16.645384 systemd-logind[1300]: Session 20 logged out. Waiting for processes to exit. Mar 17 18:32:16.645672 systemd[1]: sshd@19-10.0.0.124:22-10.0.0.1:38238.service: Deactivated successfully. Mar 17 18:32:16.644000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.124:22-10.0.0.1:38238 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:16.646480 systemd[1]: session-20.scope: Deactivated successfully. Mar 17 18:32:16.647549 systemd-logind[1300]: Removed session 20. 
Mar 17 18:32:19.691589 kubelet[2201]: I0317 18:32:19.691537 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:32:19.726000 audit[5271]: NETFILTER_CFG table=filter:119 family=2 entries=33 op=nft_register_rule pid=5271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:19.730965 kernel: kauditd_printk_skb: 57 callbacks suppressed Mar 17 18:32:19.731046 kernel: audit: type=1325 audit(1742236339.726:529): table=filter:119 family=2 entries=33 op=nft_register_rule pid=5271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:19.731071 kernel: audit: type=1300 audit(1742236339.726:529): arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd60b3c60 a2=0 a3=1 items=0 ppid=2403 pid=5271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:19.726000 audit[5271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffd60b3c60 a2=0 a3=1 items=0 ppid=2403 pid=5271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:19.726000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:19.736393 kernel: audit: type=1327 audit(1742236339.726:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:19.735000 audit[5271]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=5271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:19.735000 audit[5271]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd60b3c60 a2=0 a3=1 items=0 ppid=2403 pid=5271 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:19.742630 kernel: audit: type=1325 audit(1742236339.735:530): table=nat:120 family=2 entries=27 op=nft_register_chain pid=5271 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:19.742675 kernel: audit: type=1300 audit(1742236339.735:530): arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffd60b3c60 a2=0 a3=1 items=0 ppid=2403 pid=5271 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:19.742699 kernel: audit: type=1327 audit(1742236339.735:530): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:19.735000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:21.140776 systemd[1]: run-containerd-runc-k8s.io-910fe46f65a335b82d6a3a8a1cc8bccb3d582b7bac1c07cfabc18c2168da1f8f-runc.XSkV8E.mount: Deactivated successfully. 
Mar 17 18:32:21.175000 audit[5292]: NETFILTER_CFG table=filter:121 family=2 entries=20 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:21.175000 audit[5292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffdc670b20 a2=0 a3=1 items=0 ppid=2403 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:21.182992 kernel: audit: type=1325 audit(1742236341.175:531): table=filter:121 family=2 entries=20 op=nft_register_rule pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:21.183036 kernel: audit: type=1300 audit(1742236341.175:531): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffdc670b20 a2=0 a3=1 items=0 ppid=2403 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:21.183071 kernel: audit: type=1327 audit(1742236341.175:531): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:21.175000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:21.185000 audit[5292]: NETFILTER_CFG table=nat:122 family=2 entries=106 op=nft_register_chain pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:21.185000 audit[5292]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffdc670b20 a2=0 a3=1 items=0 ppid=2403 pid=5292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 
18:32:21.185000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:21.189431 kernel: audit: type=1325 audit(1742236341.185:532): table=nat:122 family=2 entries=106 op=nft_register_chain pid=5292 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:21.645517 systemd[1]: Started sshd@20-10.0.0.124:22-10.0.0.1:38244.service. Mar 17 18:32:21.644000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.124:22-10.0.0.1:38244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:21.681000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.682654 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 38244 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:21.682000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.682000 audit[5297]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd0dc13b0 a2=3 a3=1 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:21.682000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:21.684237 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:21.687435 systemd-logind[1300]: New session 21 of user core. 
Mar 17 18:32:21.688198 systemd[1]: Started session-21.scope. Mar 17 18:32:21.690000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.691000 audit[5300]: CRED_ACQ pid=5300 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.810990 sshd[5297]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:21.810000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.810000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:21.813603 systemd[1]: sshd@20-10.0.0.124:22-10.0.0.1:38244.service: Deactivated successfully. Mar 17 18:32:21.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.124:22-10.0.0.1:38244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:21.814578 systemd-logind[1300]: Session 21 logged out. Waiting for processes to exit. Mar 17 18:32:21.814619 systemd[1]: session-21.scope: Deactivated successfully. Mar 17 18:32:21.815431 systemd-logind[1300]: Removed session 21. 
Mar 17 18:32:23.916560 kubelet[2201]: I0317 18:32:23.916490 2201 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 18:32:23.949000 audit[5312]: NETFILTER_CFG table=filter:123 family=2 entries=8 op=nft_register_rule pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:23.949000 audit[5312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffcaf53250 a2=0 a3=1 items=0 ppid=2403 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:23.949000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:23.960000 audit[5312]: NETFILTER_CFG table=nat:124 family=2 entries=58 op=nft_register_chain pid=5312 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Mar 17 18:32:23.960000 audit[5312]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20452 a0=3 a1=ffffcaf53250 a2=0 a3=1 items=0 ppid=2403 pid=5312 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:23.960000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Mar 17 18:32:26.814330 systemd[1]: Started sshd@21-10.0.0.124:22-10.0.0.1:53352.service. Mar 17 18:32:26.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.124:22-10.0.0.1:53352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:26.817990 kernel: kauditd_printk_skb: 19 callbacks suppressed Mar 17 18:32:26.818052 kernel: audit: type=1130 audit(1742236346.813:544): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.124:22-10.0.0.1:53352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:26.845000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.847244 sshd[5313]: Accepted publickey for core from 10.0.0.1 port 53352 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:26.848601 sshd[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:26.847000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.852254 systemd-logind[1300]: New session 22 of user core. Mar 17 18:32:26.852645 systemd[1]: Started session-22.scope. 
Mar 17 18:32:26.853038 kernel: audit: type=1101 audit(1742236346.845:545): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.853067 kernel: audit: type=1103 audit(1742236346.847:546): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.853093 kernel: audit: type=1006 audit(1742236346.847:547): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Mar 17 18:32:26.847000 audit[5313]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc05df680 a2=3 a3=1 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:26.857975 kernel: audit: type=1300 audit(1742236346.847:547): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc05df680 a2=3 a3=1 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:26.858032 kernel: audit: type=1327 audit(1742236346.847:547): proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:26.847000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:26.858000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Mar 17 18:32:26.862467 kernel: audit: type=1105 audit(1742236346.858:548): pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.862513 kernel: audit: type=1103 audit(1742236346.858:549): pid=5316 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.858000 audit[5316]: CRED_ACQ pid=5316 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.972821 sshd[5313]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:26.972000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.975456 systemd-logind[1300]: Session 22 logged out. Waiting for processes to exit. Mar 17 18:32:26.975580 systemd[1]: sshd@21-10.0.0.124:22-10.0.0.1:53352.service: Deactivated successfully. Mar 17 18:32:26.976428 systemd[1]: session-22.scope: Deactivated successfully. Mar 17 18:32:26.976887 systemd-logind[1300]: Removed session 22. 
Mar 17 18:32:26.972000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.980345 kernel: audit: type=1106 audit(1742236346.972:550): pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.980421 kernel: audit: type=1104 audit(1742236346.972:551): pid=5313 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:26.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.124:22-10.0.0.1:53352 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:27.311106 kubelet[2201]: E0317 18:32:27.311069 2201 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Mar 17 18:32:31.976785 systemd[1]: Started sshd@22-10.0.0.124:22-10.0.0.1:53358.service. Mar 17 18:32:31.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.124:22-10.0.0.1:53358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:31.977825 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:32:31.977884 kernel: audit: type=1130 audit(1742236351.976:553): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.124:22-10.0.0.1:53358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:32.009000 audit[5329]: USER_ACCT pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.010111 sshd[5329]: Accepted publickey for core from 10.0.0.1 port 53358 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:32.011689 sshd[5329]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:32.010000 audit[5329]: CRED_ACQ pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.015326 systemd-logind[1300]: New session 23 of user core. Mar 17 18:32:32.015800 systemd[1]: Started session-23.scope. 
Mar 17 18:32:32.016335 kernel: audit: type=1101 audit(1742236352.009:554): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.016386 kernel: audit: type=1103 audit(1742236352.010:555): pid=5329 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.016417 kernel: audit: type=1006 audit(1742236352.010:556): pid=5329 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Mar 17 18:32:32.018025 kernel: audit: type=1300 audit(1742236352.010:556): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6822c30 a2=3 a3=1 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:32.010000 audit[5329]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe6822c30 a2=3 a3=1 items=0 ppid=1 pid=5329 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:32.010000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:32.022231 kernel: audit: type=1327 audit(1742236352.010:556): proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:32.022000 audit[5329]: USER_START pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh 
res=success' Mar 17 18:32:32.026797 kernel: audit: type=1105 audit(1742236352.022:557): pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.026853 kernel: audit: type=1103 audit(1742236352.023:558): pid=5332 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.023000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.136575 sshd[5329]: pam_unix(sshd:session): session closed for user core Mar 17 18:32:32.137000 audit[5329]: USER_END pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.137000 audit[5329]: CRED_DISP pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.140249 systemd-logind[1300]: Session 23 logged out. Waiting for processes to exit. Mar 17 18:32:32.140386 systemd[1]: sshd@22-10.0.0.124:22-10.0.0.1:53358.service: Deactivated successfully. Mar 17 18:32:32.141244 systemd[1]: session-23.scope: Deactivated successfully. Mar 17 18:32:32.141716 systemd-logind[1300]: Removed session 23. 
Mar 17 18:32:32.143992 kernel: audit: type=1106 audit(1742236352.137:559): pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.144074 kernel: audit: type=1104 audit(1742236352.137:560): pid=5329 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:32.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.124:22-10.0.0.1:53358 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:37.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.124:22-10.0.0.1:39120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Mar 17 18:32:37.139691 systemd[1]: Started sshd@23-10.0.0.124:22-10.0.0.1:39120.service. Mar 17 18:32:37.140755 kernel: kauditd_printk_skb: 1 callbacks suppressed Mar 17 18:32:37.140856 kernel: audit: type=1130 audit(1742236357.139:562): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.124:22-10.0.0.1:39120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Mar 17 18:32:37.172000 audit[5367]: USER_ACCT pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.173129 sshd[5367]: Accepted publickey for core from 10.0.0.1 port 39120 ssh2: RSA SHA256:hoQCPKafrT/V1URQ18ch5K7mLY85DMM2OIJJf47c8zQ Mar 17 18:32:37.174191 sshd[5367]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Mar 17 18:32:37.173000 audit[5367]: CRED_ACQ pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.178013 systemd-logind[1300]: New session 24 of user core. Mar 17 18:32:37.178450 systemd[1]: Started session-24.scope. Mar 17 18:32:37.179057 kernel: audit: type=1101 audit(1742236357.172:563): pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.179108 kernel: audit: type=1103 audit(1742236357.173:564): pid=5367 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.181173 kernel: audit: type=1006 audit(1742236357.173:565): pid=5367 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Mar 17 18:32:37.181240 kernel: audit: type=1300 audit(1742236357.173:565): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc8d8eb0 a2=3 a3=1 items=0 ppid=1 pid=5367 auid=500 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:37.173000 audit[5367]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcc8d8eb0 a2=3 a3=1 items=0 ppid=1 pid=5367 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Mar 17 18:32:37.173000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:37.185727 kernel: audit: type=1327 audit(1742236357.173:565): proctitle=737368643A20636F7265205B707269765D Mar 17 18:32:37.184000 audit[5367]: USER_START pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.189252 kernel: audit: type=1105 audit(1742236357.184:566): pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.189320 kernel: audit: type=1103 audit(1742236357.185:567): pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.185000 audit[5370]: CRED_ACQ pid=5370 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.294667 sshd[5367]: pam_unix(sshd:session): session closed for user core Mar 17 
18:32:37.295000 audit[5367]: USER_END pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.296982 systemd[1]: sshd@23-10.0.0.124:22-10.0.0.1:39120.service: Deactivated successfully. Mar 17 18:32:37.298061 systemd[1]: session-24.scope: Deactivated successfully. Mar 17 18:32:37.298075 systemd-logind[1300]: Session 24 logged out. Waiting for processes to exit. Mar 17 18:32:37.299094 systemd-logind[1300]: Removed session 24. Mar 17 18:32:37.295000 audit[5367]: CRED_DISP pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.302169 kernel: audit: type=1106 audit(1742236357.295:568): pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.302240 kernel: audit: type=1104 audit(1742236357.295:569): pid=5367 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Mar 17 18:32:37.296000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-10.0.0.124:22-10.0.0.1:39120 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'