Dec 13 14:20:53.722756 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Dec 13 14:20:53.722777 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Fri Dec 13 12:58:58 -00 2024
Dec 13 14:20:53.722785 kernel: efi: EFI v2.70 by EDK II
Dec 13 14:20:53.722790 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Dec 13 14:20:53.722795 kernel: random: crng init done
Dec 13 14:20:53.722801 kernel: ACPI: Early table checksum verification disabled
Dec 13 14:20:53.722807 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Dec 13 14:20:53.722814 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 14:20:53.722819 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722824 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722880 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722889 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722894 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722900 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722909 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722915 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722921 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 14:20:53.722926 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Dec 13 14:20:53.722932 kernel: NUMA: Failed to initialise from firmware
Dec 13 14:20:53.723005 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:20:53.723127 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
Dec 13 14:20:53.723135 kernel: Zone ranges:
Dec 13 14:20:53.723140 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:20:53.723151 kernel: DMA32 empty
Dec 13 14:20:53.723157 kernel: Normal empty
Dec 13 14:20:53.723162 kernel: Movable zone start for each node
Dec 13 14:20:53.723168 kernel: Early memory node ranges
Dec 13 14:20:53.723182 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Dec 13 14:20:53.723188 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Dec 13 14:20:53.723194 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Dec 13 14:20:53.723200 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Dec 13 14:20:53.723205 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Dec 13 14:20:53.723211 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Dec 13 14:20:53.723217 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Dec 13 14:20:53.723222 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Dec 13 14:20:53.723230 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Dec 13 14:20:53.723235 kernel: psci: probing for conduit method from ACPI.
Dec 13 14:20:53.723241 kernel: psci: PSCIv1.1 detected in firmware.
Dec 13 14:20:53.723247 kernel: psci: Using standard PSCI v0.2 function IDs
Dec 13 14:20:53.723252 kernel: psci: Trusted OS migration not required
Dec 13 14:20:53.723260 kernel: psci: SMC Calling Convention v1.1
Dec 13 14:20:53.723267 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Dec 13 14:20:53.723274 kernel: ACPI: SRAT not present
Dec 13 14:20:53.723280 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
Dec 13 14:20:53.723286 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
Dec 13 14:20:53.723293 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Dec 13 14:20:53.723299 kernel: Detected PIPT I-cache on CPU0
Dec 13 14:20:53.723305 kernel: CPU features: detected: GIC system register CPU interface
Dec 13 14:20:53.723311 kernel: CPU features: detected: Hardware dirty bit management
Dec 13 14:20:53.723317 kernel: CPU features: detected: Spectre-v4
Dec 13 14:20:53.723323 kernel: CPU features: detected: Spectre-BHB
Dec 13 14:20:53.723330 kernel: CPU features: kernel page table isolation forced ON by KASLR
Dec 13 14:20:53.723336 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Dec 13 14:20:53.723343 kernel: CPU features: detected: ARM erratum 1418040
Dec 13 14:20:53.723349 kernel: CPU features: detected: SSBS not fully self-synchronizing
Dec 13 14:20:53.723355 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Dec 13 14:20:53.723361 kernel: Policy zone: DMA
Dec 13 14:20:53.723368 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:20:53.723375 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 14:20:53.723381 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 14:20:53.723387 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 14:20:53.723394 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 14:20:53.723401 kernel: Memory: 2457404K/2572288K available (9792K kernel code, 2092K rwdata, 7576K rodata, 36416K init, 777K bss, 114884K reserved, 0K cma-reserved)
Dec 13 14:20:53.723407 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 14:20:53.723414 kernel: trace event string verifier disabled
Dec 13 14:20:53.723420 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 14:20:53.723426 kernel: rcu: RCU event tracing is enabled.
Dec 13 14:20:53.723432 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 14:20:53.723439 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 14:20:53.723445 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 14:20:53.723451 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 14:20:53.723457 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 14:20:53.723463 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Dec 13 14:20:53.723470 kernel: GICv3: 256 SPIs implemented
Dec 13 14:20:53.723476 kernel: GICv3: 0 Extended SPIs implemented
Dec 13 14:20:53.723483 kernel: GICv3: Distributor has no Range Selector support
Dec 13 14:20:53.723488 kernel: Root IRQ handler: gic_handle_irq
Dec 13 14:20:53.723494 kernel: GICv3: 16 PPIs implemented
Dec 13 14:20:53.723501 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Dec 13 14:20:53.723507 kernel: ACPI: SRAT not present
Dec 13 14:20:53.723513 kernel: ITS [mem 0x08080000-0x0809ffff]
Dec 13 14:20:53.723519 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Dec 13 14:20:53.723525 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Dec 13 14:20:53.723531 kernel: GICv3: using LPI property table @0x00000000400d0000
Dec 13 14:20:53.723537 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Dec 13 14:20:53.723545 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:20:53.723551 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Dec 13 14:20:53.723557 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Dec 13 14:20:53.723563 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Dec 13 14:20:53.723569 kernel: arm-pv: using stolen time PV
Dec 13 14:20:53.723575 kernel: Console: colour dummy device 80x25
Dec 13 14:20:53.723582 kernel: ACPI: Core revision 20210730
Dec 13 14:20:53.723588 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Dec 13 14:20:53.723594 kernel: pid_max: default: 32768 minimum: 301
Dec 13 14:20:53.723601 kernel: LSM: Security Framework initializing
Dec 13 14:20:53.723608 kernel: SELinux: Initializing.
Dec 13 14:20:53.723615 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:20:53.723621 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 14:20:53.723627 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 14:20:53.723633 kernel: Platform MSI: ITS@0x8080000 domain created
Dec 13 14:20:53.723640 kernel: PCI/MSI: ITS@0x8080000 domain created
Dec 13 14:20:53.723646 kernel: Remapping and enabling EFI services.
Dec 13 14:20:53.723652 kernel: smp: Bringing up secondary CPUs ...
Dec 13 14:20:53.723658 kernel: Detected PIPT I-cache on CPU1
Dec 13 14:20:53.723666 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Dec 13 14:20:53.723672 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Dec 13 14:20:53.723678 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:20:53.723684 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Dec 13 14:20:53.723691 kernel: Detected PIPT I-cache on CPU2
Dec 13 14:20:53.723697 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Dec 13 14:20:53.723703 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Dec 13 14:20:53.723710 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:20:53.723716 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Dec 13 14:20:53.723722 kernel: Detected PIPT I-cache on CPU3
Dec 13 14:20:53.723729 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Dec 13 14:20:53.723736 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Dec 13 14:20:53.723742 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Dec 13 14:20:53.723748 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Dec 13 14:20:53.723758 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 14:20:53.723766 kernel: SMP: Total of 4 processors activated.
Dec 13 14:20:53.723773 kernel: CPU features: detected: 32-bit EL0 Support
Dec 13 14:20:53.723780 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Dec 13 14:20:53.723786 kernel: CPU features: detected: Common not Private translations
Dec 13 14:20:53.723793 kernel: CPU features: detected: CRC32 instructions
Dec 13 14:20:53.723799 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Dec 13 14:20:53.723806 kernel: CPU features: detected: LSE atomic instructions
Dec 13 14:20:53.723814 kernel: CPU features: detected: Privileged Access Never
Dec 13 14:20:53.723820 kernel: CPU features: detected: RAS Extension Support
Dec 13 14:20:53.723827 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Dec 13 14:20:53.723833 kernel: CPU: All CPU(s) started at EL1
Dec 13 14:20:53.723840 kernel: alternatives: patching kernel code
Dec 13 14:20:53.723883 kernel: devtmpfs: initialized
Dec 13 14:20:53.723889 kernel: KASLR enabled
Dec 13 14:20:53.723896 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 14:20:53.723903 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 14:20:53.723953 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 14:20:53.723961 kernel: SMBIOS 3.0.0 present.
Dec 13 14:20:53.723967 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Dec 13 14:20:53.723974 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 14:20:53.723980 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Dec 13 14:20:53.723989 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Dec 13 14:20:53.723996 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Dec 13 14:20:53.724003 kernel: audit: initializing netlink subsys (disabled)
Dec 13 14:20:53.724009 kernel: audit: type=2000 audit(0.030:1): state=initialized audit_enabled=0 res=1
Dec 13 14:20:53.724016 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 14:20:53.724022 kernel: cpuidle: using governor menu
Dec 13 14:20:53.724029 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Dec 13 14:20:53.724036 kernel: ASID allocator initialised with 32768 entries
Dec 13 14:20:53.724042 kernel: ACPI: bus type PCI registered
Dec 13 14:20:53.724050 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 14:20:53.724057 kernel: Serial: AMBA PL011 UART driver
Dec 13 14:20:53.724063 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 14:20:53.724070 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Dec 13 14:20:53.724080 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 14:20:53.724087 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Dec 13 14:20:53.724093 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 14:20:53.724100 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Dec 13 14:20:53.724108 kernel: ACPI: Added _OSI(Module Device)
Dec 13 14:20:53.724117 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 14:20:53.724124 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 14:20:53.724130 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 14:20:53.724138 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 14:20:53.724146 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 14:20:53.724153 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 14:20:53.724159 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 14:20:53.724168 kernel: ACPI: Interpreter enabled
Dec 13 14:20:53.724181 kernel: ACPI: Using GIC for interrupt routing
Dec 13 14:20:53.724190 kernel: ACPI: MCFG table detected, 1 entries
Dec 13 14:20:53.724197 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Dec 13 14:20:53.724203 kernel: printk: console [ttyAMA0] enabled
Dec 13 14:20:53.724210 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 14:20:53.724356 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 14:20:53.724419 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 14:20:53.724476 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 14:20:53.724535 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Dec 13 14:20:53.724592 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Dec 13 14:20:53.724601 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Dec 13 14:20:53.724608 kernel: PCI host bridge to bus 0000:00
Dec 13 14:20:53.724671 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Dec 13 14:20:53.724724 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Dec 13 14:20:53.724775 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Dec 13 14:20:53.724826 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 14:20:53.724926 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Dec 13 14:20:53.724998 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 14:20:53.725068 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Dec 13 14:20:53.725128 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Dec 13 14:20:53.725194 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:20:53.725253 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Dec 13 14:20:53.725315 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Dec 13 14:20:53.725372 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Dec 13 14:20:53.725424 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Dec 13 14:20:53.725479 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Dec 13 14:20:53.725530 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Dec 13 14:20:53.725539 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Dec 13 14:20:53.725546 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Dec 13 14:20:53.725552 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Dec 13 14:20:53.725561 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Dec 13 14:20:53.725567 kernel: iommu: Default domain type: Translated
Dec 13 14:20:53.725574 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Dec 13 14:20:53.725580 kernel: vgaarb: loaded
Dec 13 14:20:53.725587 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 14:20:53.725594 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 14:20:53.725600 kernel: PTP clock support registered
Dec 13 14:20:53.725607 kernel: Registered efivars operations
Dec 13 14:20:53.725613 kernel: clocksource: Switched to clocksource arch_sys_counter
Dec 13 14:20:53.725621 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 14:20:53.725632 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 14:20:53.725639 kernel: pnp: PnP ACPI init
Dec 13 14:20:53.725712 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Dec 13 14:20:53.725721 kernel: pnp: PnP ACPI: found 1 devices
Dec 13 14:20:53.725728 kernel: NET: Registered PF_INET protocol family
Dec 13 14:20:53.725735 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 14:20:53.725742 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 14:20:53.725750 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 14:20:53.725757 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 14:20:53.725764 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Dec 13 14:20:53.725770 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 14:20:53.725777 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:20:53.725784 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 14:20:53.725790 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 14:20:53.725797 kernel: PCI: CLS 0 bytes, default 64
Dec 13 14:20:53.725804 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Dec 13 14:20:53.725811 kernel: kvm [1]: HYP mode not available
Dec 13 14:20:53.725818 kernel: Initialise system trusted keyrings
Dec 13 14:20:53.725824 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 14:20:53.725831 kernel: Key type asymmetric registered
Dec 13 14:20:53.725837 kernel: Asymmetric key parser 'x509' registered
Dec 13 14:20:53.725878 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 14:20:53.725914 kernel: io scheduler mq-deadline registered
Dec 13 14:20:53.725921 kernel: io scheduler kyber registered
Dec 13 14:20:53.725927 kernel: io scheduler bfq registered
Dec 13 14:20:53.725936 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Dec 13 14:20:53.725943 kernel: ACPI: button: Power Button [PWRB]
Dec 13 14:20:53.725950 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Dec 13 14:20:53.726022 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Dec 13 14:20:53.726031 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 14:20:53.726038 kernel: thunder_xcv, ver 1.0
Dec 13 14:20:53.726044 kernel: thunder_bgx, ver 1.0
Dec 13 14:20:53.726051 kernel: nicpf, ver 1.0
Dec 13 14:20:53.726057 kernel: nicvf, ver 1.0
Dec 13 14:20:53.726124 kernel: rtc-efi rtc-efi.0: registered as rtc0
Dec 13 14:20:53.726188 kernel: rtc-efi rtc-efi.0: setting system clock to 2024-12-13T14:20:53 UTC (1734099653)
Dec 13 14:20:53.726197 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 14:20:53.726204 kernel: NET: Registered PF_INET6 protocol family
Dec 13 14:20:53.726211 kernel: Segment Routing with IPv6
Dec 13 14:20:53.726217 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 14:20:53.726224 kernel: NET: Registered PF_PACKET protocol family
Dec 13 14:20:53.726230 kernel: Key type dns_resolver registered
Dec 13 14:20:53.726239 kernel: registered taskstats version 1
Dec 13 14:20:53.726245 kernel: Loading compiled-in X.509 certificates
Dec 13 14:20:53.726252 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: e011ba9949ade5a6d03f7a5e28171f7f59e70f8a'
Dec 13 14:20:53.726259 kernel: Key type .fscrypt registered
Dec 13 14:20:53.726266 kernel: Key type fscrypt-provisioning registered
Dec 13 14:20:53.726272 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 14:20:53.726279 kernel: ima: Allocated hash algorithm: sha1
Dec 13 14:20:53.726285 kernel: ima: No architecture policies found
Dec 13 14:20:53.726292 kernel: clk: Disabling unused clocks
Dec 13 14:20:53.726300 kernel: Freeing unused kernel memory: 36416K
Dec 13 14:20:53.726306 kernel: Run /init as init process
Dec 13 14:20:53.726313 kernel: with arguments:
Dec 13 14:20:53.726319 kernel: /init
Dec 13 14:20:53.726325 kernel: with environment:
Dec 13 14:20:53.726332 kernel: HOME=/
Dec 13 14:20:53.726338 kernel: TERM=linux
Dec 13 14:20:53.726344 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 14:20:53.726353 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Dec 13 14:20:53.726362 systemd[1]: Detected virtualization kvm.
Dec 13 14:20:53.726370 systemd[1]: Detected architecture arm64.
Dec 13 14:20:53.726376 systemd[1]: Running in initrd.
Dec 13 14:20:53.726383 systemd[1]: No hostname configured, using default hostname.
Dec 13 14:20:53.726390 systemd[1]: Hostname set to .
Dec 13 14:20:53.726397 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 14:20:53.726404 systemd[1]: Queued start job for default target initrd.target.
Dec 13 14:20:53.726413 systemd[1]: Started systemd-ask-password-console.path.
Dec 13 14:20:53.726419 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:20:53.726426 systemd[1]: Reached target paths.target.
Dec 13 14:20:53.726433 systemd[1]: Reached target slices.target.
Dec 13 14:20:53.726440 systemd[1]: Reached target swap.target.
Dec 13 14:20:53.726447 systemd[1]: Reached target timers.target.
Dec 13 14:20:53.726454 systemd[1]: Listening on iscsid.socket.
Dec 13 14:20:53.726462 systemd[1]: Listening on iscsiuio.socket.
Dec 13 14:20:53.726469 systemd[1]: Listening on systemd-journald-audit.socket.
Dec 13 14:20:53.726476 systemd[1]: Listening on systemd-journald-dev-log.socket.
Dec 13 14:20:53.726483 systemd[1]: Listening on systemd-journald.socket.
Dec 13 14:20:53.726490 systemd[1]: Listening on systemd-networkd.socket.
Dec 13 14:20:53.726497 systemd[1]: Listening on systemd-udevd-control.socket.
Dec 13 14:20:53.726504 systemd[1]: Listening on systemd-udevd-kernel.socket.
Dec 13 14:20:53.726510 systemd[1]: Reached target sockets.target.
Dec 13 14:20:53.726517 systemd[1]: Starting kmod-static-nodes.service...
Dec 13 14:20:53.726526 systemd[1]: Finished network-cleanup.service.
Dec 13 14:20:53.726533 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 14:20:53.726540 systemd[1]: Starting systemd-journald.service...
Dec 13 14:20:53.726547 systemd[1]: Starting systemd-modules-load.service...
Dec 13 14:20:53.726554 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:20:53.726561 systemd[1]: Starting systemd-vconsole-setup.service...
Dec 13 14:20:53.726568 systemd[1]: Finished kmod-static-nodes.service.
Dec 13 14:20:53.726575 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 14:20:53.726582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Dec 13 14:20:53.726589 systemd[1]: Finished systemd-vconsole-setup.service.
Dec 13 14:20:53.726596 systemd[1]: Starting dracut-cmdline-ask.service...
Dec 13 14:20:53.726603 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Dec 13 14:20:53.726611 kernel: audit: type=1130 audit(1734099653.724:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.726621 systemd-journald[289]: Journal started
Dec 13 14:20:53.726660 systemd-journald[289]: Runtime Journal (/run/log/journal/3722d1d328154d3db95d37b845416763) is 6.0M, max 48.7M, 42.6M free.
Dec 13 14:20:53.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.717190 systemd-modules-load[290]: Inserted module 'overlay'
Dec 13 14:20:53.729357 systemd[1]: Started systemd-journald.service.
Dec 13 14:20:53.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.733378 kernel: audit: type=1130 audit(1734099653.729:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.737663 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 14:20:53.739981 systemd-modules-load[290]: Inserted module 'br_netfilter'
Dec 13 14:20:53.740164 systemd-resolved[291]: Positive Trust Anchors:
Dec 13 14:20:53.741391 kernel: Bridge firewalling registered
Dec 13 14:20:53.740178 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:20:53.740208 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:20:53.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.744419 systemd-resolved[291]: Defaulting to hostname 'linux'.
Dec 13 14:20:53.750060 kernel: audit: type=1130 audit(1734099653.747:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.745142 systemd[1]: Started systemd-resolved.service.
Dec 13 14:20:53.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.753879 kernel: audit: type=1130 audit(1734099653.749:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.753893 kernel: SCSI subsystem initialized
Dec 13 14:20:53.747321 systemd[1]: Finished dracut-cmdline-ask.service.
Dec 13 14:20:53.750710 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:20:53.755192 systemd[1]: Starting dracut-cmdline.service...
Dec 13 14:20:53.762607 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 14:20:53.762638 kernel: device-mapper: uevent: version 1.0.3
Dec 13 14:20:53.763960 dracut-cmdline[307]: dracut-dracut-053
Dec 13 14:20:53.764707 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Dec 13 14:20:53.766084 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=5997a8cf94b1df1856dc785f0a7074604bbf4c21fdcca24a1996021471a77601
Dec 13 14:20:53.769465 systemd-modules-load[290]: Inserted module 'dm_multipath'
Dec 13 14:20:53.770355 systemd[1]: Finished systemd-modules-load.service.
Dec 13 14:20:53.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.772226 systemd[1]: Starting systemd-sysctl.service...
Dec 13 14:20:53.774992 kernel: audit: type=1130 audit(1734099653.770:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.780028 systemd[1]: Finished systemd-sysctl.service.
Dec 13 14:20:53.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.783864 kernel: audit: type=1130 audit(1734099653.780:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.826870 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 14:20:53.840874 kernel: iscsi: registered transport (tcp)
Dec 13 14:20:53.855927 kernel: iscsi: registered transport (qla4xxx)
Dec 13 14:20:53.855947 kernel: QLogic iSCSI HBA Driver
Dec 13 14:20:53.898644 systemd[1]: Finished dracut-cmdline.service.
Dec 13 14:20:53.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.900267 systemd[1]: Starting dracut-pre-udev.service...
Dec 13 14:20:53.903211 kernel: audit: type=1130 audit(1734099653.899:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:53.943880 kernel: raid6: neonx8 gen() 13806 MB/s
Dec 13 14:20:53.960874 kernel: raid6: neonx8 xor() 10826 MB/s
Dec 13 14:20:53.977866 kernel: raid6: neonx4 gen() 13680 MB/s
Dec 13 14:20:53.994868 kernel: raid6: neonx4 xor() 11225 MB/s
Dec 13 14:20:54.011866 kernel: raid6: neonx2 gen() 12953 MB/s
Dec 13 14:20:54.028865 kernel: raid6: neonx2 xor() 10315 MB/s
Dec 13 14:20:54.045880 kernel: raid6: neonx1 gen() 10520 MB/s
Dec 13 14:20:54.062901 kernel: raid6: neonx1 xor() 8767 MB/s
Dec 13 14:20:54.079869 kernel: raid6: int64x8 gen() 6243 MB/s
Dec 13 14:20:54.096871 kernel: raid6: int64x8 xor() 3519 MB/s
Dec 13 14:20:54.113867 kernel: raid6: int64x4 gen() 7190 MB/s
Dec 13 14:20:54.130867 kernel: raid6: int64x4 xor() 3833 MB/s
Dec 13 14:20:54.147867 kernel: raid6: int64x2 gen() 6054 MB/s
Dec 13 14:20:54.164868 kernel: raid6: int64x2 xor() 3297 MB/s
Dec 13 14:20:54.181869 kernel: raid6: int64x1 gen() 5039 MB/s
Dec 13 14:20:54.199050 kernel: raid6: int64x1 xor() 2633 MB/s
Dec 13 14:20:54.199062 kernel: raid6: using algorithm neonx8 gen() 13806 MB/s
Dec 13 14:20:54.199071 kernel: raid6: .... xor() 10826 MB/s, rmw enabled
Dec 13 14:20:54.200146 kernel: raid6: using neon recovery algorithm
Dec 13 14:20:54.212869 kernel: xor: measuring software checksum speed
Dec 13 14:20:54.212889 kernel: 8regs : 16573 MB/sec
Dec 13 14:20:54.214024 kernel: 32regs : 18267 MB/sec
Dec 13 14:20:54.214036 kernel: arm64_neon : 27459 MB/sec
Dec 13 14:20:54.214044 kernel: xor: using function: arm64_neon (27459 MB/sec)
Dec 13 14:20:54.270866 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Dec 13 14:20:54.281391 systemd[1]: Finished dracut-pre-udev.service.
Dec 13 14:20:54.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:54.281000 audit: BPF prog-id=7 op=LOAD
Dec 13 14:20:54.284000 audit: BPF prog-id=8 op=LOAD
Dec 13 14:20:54.285863 kernel: audit: type=1130 audit(1734099654.281:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:54.285884 kernel: audit: type=1334 audit(1734099654.281:10): prog-id=7 op=LOAD
Dec 13 14:20:54.323064 systemd[1]: Starting systemd-udevd.service...
Dec 13 14:20:54.335294 systemd-udevd[491]: Using default interface naming scheme 'v252'.
Dec 13 14:20:54.338552 systemd[1]: Started systemd-udevd.service.
Dec 13 14:20:54.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:20:54.344310 systemd[1]: Starting dracut-pre-trigger.service...
Dec 13 14:20:54.358307 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
Dec 13 14:20:54.385497 systemd[1]: Finished dracut-pre-trigger.service.
Dec 13 14:20:54.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:54.387762 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 14:20:54.422321 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:20:54.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:54.458252 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Dec 13 14:20:54.464330 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Dec 13 14:20:54.464344 kernel: GPT:9289727 != 19775487 Dec 13 14:20:54.464357 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 14:20:54.464365 kernel: GPT:9289727 != 19775487 Dec 13 14:20:54.464373 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 14:20:54.464383 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:20:54.477263 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 14:20:54.480047 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) Dec 13 14:20:54.483300 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 14:20:54.484085 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 14:20:54.488507 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 14:20:54.492446 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 14:20:54.495725 systemd[1]: Starting disk-uuid.service... Dec 13 14:20:54.501080 disk-uuid[564]: Primary Header is updated. Dec 13 14:20:54.501080 disk-uuid[564]: Secondary Entries is updated. Dec 13 14:20:54.501080 disk-uuid[564]: Secondary Header is updated. 
Dec 13 14:20:54.503863 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:20:55.517370 disk-uuid[565]: The operation has completed successfully. Dec 13 14:20:55.518288 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Dec 13 14:20:55.540720 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 14:20:55.541661 systemd[1]: Finished disk-uuid.service. Dec 13 14:20:55.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.543672 systemd[1]: Starting verity-setup.service... Dec 13 14:20:55.559862 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Dec 13 14:20:55.581126 systemd[1]: Found device dev-mapper-usr.device. Dec 13 14:20:55.583137 systemd[1]: Mounting sysusr-usr.mount... Dec 13 14:20:55.584876 systemd[1]: Finished verity-setup.service. Dec 13 14:20:55.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.631895 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 14:20:55.630909 systemd[1]: Mounted sysusr-usr.mount. Dec 13 14:20:55.631566 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 14:20:55.632302 systemd[1]: Starting ignition-setup.service... Dec 13 14:20:55.633569 systemd[1]: Starting parse-ip-for-networkd.service... 
Dec 13 14:20:55.641466 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:20:55.641504 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:20:55.641514 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:20:55.650217 systemd[1]: mnt-oem.mount: Deactivated successfully. Dec 13 14:20:55.656476 systemd[1]: Finished ignition-setup.service. Dec 13 14:20:55.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.657830 systemd[1]: Starting ignition-fetch-offline.service... Dec 13 14:20:55.725519 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 14:20:55.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.726000 audit: BPF prog-id=9 op=LOAD Dec 13 14:20:55.727481 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:20:55.749216 ignition[656]: Ignition 2.14.0 Dec 13 14:20:55.749230 ignition[656]: Stage: fetch-offline Dec 13 14:20:55.749283 ignition[656]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:55.749293 ignition[656]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:55.749433 ignition[656]: parsed url from cmdline: "" Dec 13 14:20:55.749436 ignition[656]: no config URL provided Dec 13 14:20:55.749441 ignition[656]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 14:20:55.749448 ignition[656]: no config at "/usr/lib/ignition/user.ign" Dec 13 14:20:55.749466 ignition[656]: op(1): [started] loading QEMU firmware config module Dec 13 14:20:55.749470 ignition[656]: op(1): executing: "modprobe" "qemu_fw_cfg" Dec 13 14:20:55.755408 systemd-networkd[742]: lo: Link UP Dec 13 14:20:55.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.752902 ignition[656]: op(1): [finished] loading QEMU firmware config module Dec 13 14:20:55.755412 systemd-networkd[742]: lo: Gained carrier Dec 13 14:20:55.755775 systemd-networkd[742]: Enumeration completed Dec 13 14:20:55.755973 systemd-networkd[742]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 14:20:55.756190 systemd[1]: Started systemd-networkd.service. Dec 13 14:20:55.757132 systemd-networkd[742]: eth0: Link UP Dec 13 14:20:55.757135 systemd-networkd[742]: eth0: Gained carrier Dec 13 14:20:55.757195 systemd[1]: Reached target network.target. Dec 13 14:20:55.758582 systemd[1]: Starting iscsiuio.service... Dec 13 14:20:55.767527 systemd[1]: Started iscsiuio.service. Dec 13 14:20:55.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:55.768976 systemd[1]: Starting iscsid.service... Dec 13 14:20:55.772248 iscsid[749]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:20:55.772248 iscsid[749]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 14:20:55.772248 iscsid[749]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 14:20:55.772248 iscsid[749]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 14:20:55.772248 iscsid[749]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 14:20:55.772248 iscsid[749]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 14:20:55.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.772263 systemd-networkd[742]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1 Dec 13 14:20:55.775152 systemd[1]: Started iscsid.service. Dec 13 14:20:55.779275 systemd[1]: Starting dracut-initqueue.service... Dec 13 14:20:55.789302 systemd[1]: Finished dracut-initqueue.service. Dec 13 14:20:55.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.790123 systemd[1]: Reached target remote-fs-pre.target. Dec 13 14:20:55.791404 systemd[1]: Reached target remote-cryptsetup.target.
Dec 13 14:20:55.792684 systemd[1]: Reached target remote-fs.target. Dec 13 14:20:55.794603 systemd[1]: Starting dracut-pre-mount.service... Dec 13 14:20:55.802485 systemd[1]: Finished dracut-pre-mount.service. Dec 13 14:20:55.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.816964 ignition[656]: parsing config with SHA512: ab199464926518d630714e0229bd21fd8f0a28f7396c244bfe14c5f82acafa885c73778f771930357950bd0d2001f11b8b507412e124033cc2cb544c04d3ffd9 Dec 13 14:20:55.823957 unknown[656]: fetched base config from "system" Dec 13 14:20:55.823972 unknown[656]: fetched user config from "qemu" Dec 13 14:20:55.824643 ignition[656]: fetch-offline: fetch-offline passed Dec 13 14:20:55.826036 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 14:20:55.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.824730 ignition[656]: Ignition finished successfully Dec 13 14:20:55.827344 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 14:20:55.828133 systemd[1]: Starting ignition-kargs.service... 
Dec 13 14:20:55.837111 ignition[763]: Ignition 2.14.0 Dec 13 14:20:55.837121 ignition[763]: Stage: kargs Dec 13 14:20:55.837231 ignition[763]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:55.837241 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:55.838156 ignition[763]: kargs: kargs passed Dec 13 14:20:55.838207 ignition[763]: Ignition finished successfully Dec 13 14:20:55.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.840569 systemd[1]: Finished ignition-kargs.service. Dec 13 14:20:55.842341 systemd[1]: Starting ignition-disks.service... Dec 13 14:20:55.848649 ignition[770]: Ignition 2.14.0 Dec 13 14:20:55.848659 ignition[770]: Stage: disks Dec 13 14:20:55.848752 ignition[770]: no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:55.850527 systemd[1]: Finished ignition-disks.service. Dec 13 14:20:55.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.848761 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:55.851703 systemd[1]: Reached target initrd-root-device.target. Dec 13 14:20:55.849683 ignition[770]: disks: disks passed Dec 13 14:20:55.852652 systemd[1]: Reached target local-fs-pre.target. Dec 13 14:20:55.849723 ignition[770]: Ignition finished successfully Dec 13 14:20:55.853831 systemd[1]: Reached target local-fs.target. Dec 13 14:20:55.854856 systemd[1]: Reached target sysinit.target. Dec 13 14:20:55.855725 systemd[1]: Reached target basic.target. Dec 13 14:20:55.857581 systemd[1]: Starting systemd-fsck-root.service... 
Dec 13 14:20:55.868883 systemd-fsck[778]: ROOT: clean, 621/553520 files, 56020/553472 blocks Dec 13 14:20:55.872939 systemd[1]: Finished systemd-fsck-root.service. Dec 13 14:20:55.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.874544 systemd[1]: Mounting sysroot.mount... Dec 13 14:20:55.882872 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 14:20:55.883071 systemd[1]: Mounted sysroot.mount. Dec 13 14:20:55.883644 systemd[1]: Reached target initrd-root-fs.target. Dec 13 14:20:55.887537 systemd[1]: Mounting sysroot-usr.mount... Dec 13 14:20:55.888264 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Dec 13 14:20:55.888301 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 14:20:55.888323 systemd[1]: Reached target ignition-diskful.target. Dec 13 14:20:55.890253 systemd[1]: Mounted sysroot-usr.mount. Dec 13 14:20:55.892341 systemd[1]: Starting initrd-setup-root.service... Dec 13 14:20:55.896502 initrd-setup-root[788]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 14:20:55.901409 initrd-setup-root[796]: cut: /sysroot/etc/group: No such file or directory Dec 13 14:20:55.905229 initrd-setup-root[804]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 14:20:55.909509 initrd-setup-root[812]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 14:20:55.936497 systemd[1]: Finished initrd-setup-root.service. Dec 13 14:20:55.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:55.937968 systemd[1]: Starting ignition-mount.service... Dec 13 14:20:55.939229 systemd[1]: Starting sysroot-boot.service... Dec 13 14:20:55.944024 bash[829]: umount: /sysroot/usr/share/oem: not mounted. Dec 13 14:20:55.952711 ignition[831]: INFO : Ignition 2.14.0 Dec 13 14:20:55.952711 ignition[831]: INFO : Stage: mount Dec 13 14:20:55.954963 ignition[831]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:55.954963 ignition[831]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:55.954963 ignition[831]: INFO : mount: mount passed Dec 13 14:20:55.954963 ignition[831]: INFO : Ignition finished successfully Dec 13 14:20:55.956000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:55.956767 systemd[1]: Finished ignition-mount.service. Dec 13 14:20:55.959265 systemd[1]: Finished sysroot-boot.service. Dec 13 14:20:56.591532 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 14:20:56.596875 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (839) Dec 13 14:20:56.599487 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Dec 13 14:20:56.599523 kernel: BTRFS info (device vda6): using free space tree Dec 13 14:20:56.599533 kernel: BTRFS info (device vda6): has skinny extents Dec 13 14:20:56.602611 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 14:20:56.604030 systemd[1]: Starting ignition-files.service... 
Dec 13 14:20:56.617984 ignition[859]: INFO : Ignition 2.14.0 Dec 13 14:20:56.617984 ignition[859]: INFO : Stage: files Dec 13 14:20:56.619230 ignition[859]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:56.619230 ignition[859]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:56.619230 ignition[859]: DEBUG : files: compiled without relabeling support, skipping Dec 13 14:20:56.623496 ignition[859]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 14:20:56.623496 ignition[859]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 14:20:56.625823 ignition[859]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 14:20:56.625823 ignition[859]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 14:20:56.628416 ignition[859]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 14:20:56.628002 unknown[859]: wrote ssh authorized keys file for user: core Dec 13 14:20:56.631629 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:20:56.631629 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 14:20:56.631629 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 14:20:56.631629 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Dec 13 14:20:56.872113 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 14:20:57.067955 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Dec 13 
14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:20:57.069654 
ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:20:57.069654 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-arm64.raw: attempt #1 Dec 13 14:20:57.375954 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 14:20:57.539931 systemd-networkd[742]: eth0: Gained IPv6LL Dec 13 14:20:57.584305 ignition[859]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-arm64.raw" Dec 13 14:20:57.584305 ignition[859]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(10): [started] processing unit 
"coreos-metadata.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(10): op(11): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(10): op(11): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(10): [finished] processing unit "coreos-metadata.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" Dec 13 14:20:57.587315 ignition[859]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:20:57.620984 ignition[859]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" Dec 13 14:20:57.623034 ignition[859]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" Dec 13 14:20:57.623034 ignition[859]: INFO : files: createResultFile: createFiles: op(15): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:20:57.623034 ignition[859]: INFO : files: createResultFile: createFiles: op(15): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 14:20:57.623034 ignition[859]: INFO : files: files passed Dec 13 14:20:57.623034 ignition[859]: INFO : Ignition finished successfully Dec 13 14:20:57.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.623210 systemd[1]: Finished ignition-files.service. 
Dec 13 14:20:57.625522 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 14:20:57.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.631495 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Dec 13 14:20:57.626528 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 14:20:57.633839 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 14:20:57.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.627180 systemd[1]: Starting ignition-quench.service... Dec 13 14:20:57.630039 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 14:20:57.630122 systemd[1]: Finished ignition-quench.service. Dec 13 14:20:57.632585 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 14:20:57.634667 systemd[1]: Reached target ignition-complete.target. Dec 13 14:20:57.636713 systemd[1]: Starting initrd-parse-etc.service... Dec 13 14:20:57.648318 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 14:20:57.648400 systemd[1]: Finished initrd-parse-etc.service. Dec 13 14:20:57.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 14:20:57.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.649713 systemd[1]: Reached target initrd-fs.target. Dec 13 14:20:57.650621 systemd[1]: Reached target initrd.target. Dec 13 14:20:57.651584 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 14:20:57.652266 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 14:20:57.661899 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 14:20:57.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.663136 systemd[1]: Starting initrd-cleanup.service... Dec 13 14:20:57.670521 systemd[1]: Stopped target nss-lookup.target. Dec 13 14:20:57.671188 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 14:20:57.672314 systemd[1]: Stopped target timers.target. Dec 13 14:20:57.673306 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 14:20:57.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.673405 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 14:20:57.674341 systemd[1]: Stopped target initrd.target. Dec 13 14:20:57.675362 systemd[1]: Stopped target basic.target. Dec 13 14:20:57.676407 systemd[1]: Stopped target ignition-complete.target. Dec 13 14:20:57.677401 systemd[1]: Stopped target ignition-diskful.target. Dec 13 14:20:57.678384 systemd[1]: Stopped target initrd-root-device.target. Dec 13 14:20:57.679565 systemd[1]: Stopped target remote-fs.target. Dec 13 14:20:57.680562 systemd[1]: Stopped target remote-fs-pre.target. 
Dec 13 14:20:57.681632 systemd[1]: Stopped target sysinit.target. Dec 13 14:20:57.682573 systemd[1]: Stopped target local-fs.target. Dec 13 14:20:57.683532 systemd[1]: Stopped target local-fs-pre.target. Dec 13 14:20:57.684479 systemd[1]: Stopped target swap.target. Dec 13 14:20:57.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.685409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 14:20:57.685507 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 14:20:57.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.686534 systemd[1]: Stopped target cryptsetup.target. Dec 13 14:20:57.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.687413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 14:20:57.687500 systemd[1]: Stopped dracut-initqueue.service. Dec 13 14:20:57.688592 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 14:20:57.688682 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 14:20:57.689616 systemd[1]: Stopped target paths.target. Dec 13 14:20:57.690464 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 14:20:57.693877 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 14:20:57.694729 systemd[1]: Stopped target slices.target. Dec 13 14:20:57.695722 systemd[1]: Stopped target sockets.target. Dec 13 14:20:57.696673 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Dec 13 14:20:57.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.696772 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 14:20:57.698000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.697745 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 14:20:57.697831 systemd[1]: Stopped ignition-files.service. Dec 13 14:20:57.703324 iscsid[749]: iscsid shutting down. Dec 13 14:20:57.699621 systemd[1]: Stopping ignition-mount.service... Dec 13 14:20:57.703000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.700832 systemd[1]: Stopping iscsid.service... Dec 13 14:20:57.703675 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 14:20:57.707675 ignition[899]: INFO : Ignition 2.14.0 Dec 13 14:20:57.707675 ignition[899]: INFO : Stage: umount Dec 13 14:20:57.707675 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 14:20:57.707675 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Dec 13 14:20:57.707675 ignition[899]: INFO : umount: umount passed Dec 13 14:20:57.707675 ignition[899]: INFO : Ignition finished successfully Dec 13 14:20:57.708000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:57.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.703795 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 14:20:57.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.705395 systemd[1]: Stopping sysroot-boot.service... Dec 13 14:20:57.708186 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 14:20:57.708324 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 14:20:57.709479 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 14:20:57.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.709576 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 14:20:57.718000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.711956 systemd[1]: iscsid.service: Deactivated successfully. Dec 13 14:20:57.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.712042 systemd[1]: Stopped iscsid.service. Dec 13 14:20:57.713418 systemd[1]: ignition-mount.service: Deactivated successfully. 
Dec 13 14:20:57.713491 systemd[1]: Stopped ignition-mount.service. Dec 13 14:20:57.715074 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 14:20:57.715609 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 14:20:57.715678 systemd[1]: Closed iscsid.socket. Dec 13 14:20:57.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.716959 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 14:20:57.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.717005 systemd[1]: Stopped ignition-disks.service. Dec 13 14:20:57.718106 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 14:20:57.718149 systemd[1]: Stopped ignition-kargs.service. Dec 13 14:20:57.719793 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 14:20:57.719833 systemd[1]: Stopped ignition-setup.service. Dec 13 14:20:57.720901 systemd[1]: Stopping iscsiuio.service... Dec 13 14:20:57.724300 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 14:20:57.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.724411 systemd[1]: Finished initrd-cleanup.service. Dec 13 14:20:57.726042 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 14:20:57.726130 systemd[1]: Stopped iscsiuio.service. 
Dec 13 14:20:57.727479 systemd[1]: Stopped target network.target. Dec 13 14:20:57.738000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.728488 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 14:20:57.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.728519 systemd[1]: Closed iscsiuio.socket. Dec 13 14:20:57.739000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.730571 systemd[1]: Stopping systemd-networkd.service... Dec 13 14:20:57.731577 systemd[1]: Stopping systemd-resolved.service... Dec 13 14:20:57.732213 systemd-networkd[742]: eth0: DHCPv6 lease lost Dec 13 14:20:57.733446 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 14:20:57.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.745000 audit: BPF prog-id=9 op=UNLOAD Dec 13 14:20:57.733532 systemd[1]: Stopped systemd-networkd.service. Dec 13 14:20:57.734646 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 14:20:57.734677 systemd[1]: Closed systemd-networkd.socket. Dec 13 14:20:57.736355 systemd[1]: Stopping network-cleanup.service... Dec 13 14:20:57.737322 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 14:20:57.737379 systemd[1]: Stopped parse-ip-for-networkd.service. 
Dec 13 14:20:57.750000 audit: BPF prog-id=6 op=UNLOAD Dec 13 14:20:57.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.738482 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 14:20:57.738520 systemd[1]: Stopped systemd-sysctl.service. Dec 13 14:20:57.740184 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 14:20:57.740225 systemd[1]: Stopped systemd-modules-load.service. Dec 13 14:20:57.740942 systemd[1]: Stopping systemd-udevd.service... Dec 13 14:20:57.744589 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 14:20:57.745034 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 14:20:57.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.745114 systemd[1]: Stopped systemd-resolved.service. Dec 13 14:20:57.749286 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 14:20:57.749368 systemd[1]: Stopped network-cleanup.service. Dec 13 14:20:57.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.754103 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 14:20:57.754195 systemd[1]: Stopped sysroot-boot.service. Dec 13 14:20:57.755482 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 14:20:57.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:57.755580 systemd[1]: Stopped systemd-udevd.service. Dec 13 14:20:57.761000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.757923 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 14:20:57.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.757958 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 14:20:57.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.758658 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 14:20:57.758684 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 14:20:57.759824 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 14:20:57.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.759905 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 14:20:57.760807 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 14:20:57.760839 systemd[1]: Stopped dracut-cmdline.service. Dec 13 14:20:57.762028 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 14:20:57.762061 systemd[1]: Stopped dracut-cmdline-ask.service. Dec 13 14:20:57.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:57.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:57.763064 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 14:20:57.763097 systemd[1]: Stopped initrd-setup-root.service. Dec 13 14:20:57.764895 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 14:20:57.766069 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 14:20:57.766115 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 14:20:57.769571 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 14:20:57.769648 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 14:20:57.771038 systemd[1]: Reached target initrd-switch-root.target. Dec 13 14:20:57.772642 systemd[1]: Starting initrd-switch-root.service... Dec 13 14:20:57.777000 audit: BPF prog-id=5 op=UNLOAD Dec 13 14:20:57.777000 audit: BPF prog-id=4 op=UNLOAD Dec 13 14:20:57.777000 audit: BPF prog-id=3 op=UNLOAD Dec 13 14:20:57.777474 systemd[1]: Switching root. Dec 13 14:20:57.779000 audit: BPF prog-id=8 op=UNLOAD Dec 13 14:20:57.779000 audit: BPF prog-id=7 op=UNLOAD Dec 13 14:20:57.795236 systemd-journald[289]: Journal stopped Dec 13 14:20:59.836611 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). Dec 13 14:20:59.836668 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 14:20:59.836680 kernel: SELinux: Class anon_inode not defined in policy. 
Dec 13 14:20:59.836691 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 14:20:59.836701 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 14:20:59.836711 kernel: SELinux: policy capability open_perms=1 Dec 13 14:20:59.836720 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 14:20:59.836731 kernel: SELinux: policy capability always_check_network=0 Dec 13 14:20:59.836744 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 14:20:59.836755 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 14:20:59.836765 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 14:20:59.836774 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 14:20:59.836788 systemd[1]: Successfully loaded SELinux policy in 33.929ms. Dec 13 14:20:59.836804 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.652ms. Dec 13 14:20:59.836816 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 14:20:59.836828 systemd[1]: Detected virtualization kvm. Dec 13 14:20:59.836839 systemd[1]: Detected architecture arm64. Dec 13 14:20:59.836866 systemd[1]: Detected first boot. Dec 13 14:20:59.836877 systemd[1]: Initializing machine ID from VM UUID. Dec 13 14:20:59.836887 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Dec 13 14:20:59.836897 kernel: kauditd_printk_skb: 70 callbacks suppressed Dec 13 14:20:59.836912 kernel: audit: type=1400 audit(1734099658.025:81): avc: denied { associate } for pid=950 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Dec 13 14:20:59.836925 kernel: audit: type=1300 audit(1734099658.025:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c767c a1=40000caae0 a2=40000d0a00 a3=32 items=0 ppid=932 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:59.836936 kernel: audit: type=1327 audit(1734099658.025:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:20:59.836947 kernel: audit: type=1400 audit(1734099658.027:82): avc: denied { associate } for pid=950 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Dec 13 14:20:59.836958 kernel: audit: type=1300 audit(1734099658.027:82): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c7755 a2=1ed a3=0 items=2 ppid=932 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:59.836968 kernel: audit: type=1307 audit(1734099658.027:82): cwd="/" Dec 13 14:20:59.836978 kernel: audit: type=1302 audit(1734099658.027:82): 
item=0 name=(null) inode=2 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:20:59.836990 kernel: audit: type=1302 audit(1734099658.027:82): item=1 name=(null) inode=3 dev=00:2a mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:20:59.837001 kernel: audit: type=1327 audit(1734099658.027:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Dec 13 14:20:59.837011 systemd[1]: Populated /etc with preset unit settings. Dec 13 14:20:59.837023 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:20:59.837034 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:20:59.837045 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:20:59.837056 systemd[1]: Queued start job for default target multi-user.target. Dec 13 14:20:59.837068 systemd[1]: Unnecessary job was removed for dev-vda6.device. Dec 13 14:20:59.837079 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 14:20:59.837089 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 14:20:59.837099 systemd[1]: Created slice system-getty.slice. Dec 13 14:20:59.837113 systemd[1]: Created slice system-modprobe.slice. Dec 13 14:20:59.837123 systemd[1]: Created slice system-serial\x2dgetty.slice. 
Dec 13 14:20:59.837134 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 14:20:59.837146 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 14:20:59.837164 systemd[1]: Created slice user.slice. Dec 13 14:20:59.837176 systemd[1]: Started systemd-ask-password-console.path. Dec 13 14:20:59.837187 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 14:20:59.837198 systemd[1]: Set up automount boot.automount. Dec 13 14:20:59.837209 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 14:20:59.837220 systemd[1]: Reached target integritysetup.target. Dec 13 14:20:59.837230 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 14:20:59.837241 systemd[1]: Reached target remote-fs.target. Dec 13 14:20:59.837251 systemd[1]: Reached target slices.target. Dec 13 14:20:59.837263 systemd[1]: Reached target swap.target. Dec 13 14:20:59.837274 systemd[1]: Reached target torcx.target. Dec 13 14:20:59.837284 systemd[1]: Reached target veritysetup.target. Dec 13 14:20:59.837295 systemd[1]: Listening on systemd-coredump.socket. Dec 13 14:20:59.837305 systemd[1]: Listening on systemd-initctl.socket. Dec 13 14:20:59.837316 kernel: audit: type=1400 audit(1734099659.728:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 14:20:59.837329 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 14:20:59.837340 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 14:20:59.837350 systemd[1]: Listening on systemd-journald.socket. Dec 13 14:20:59.837361 systemd[1]: Listening on systemd-networkd.socket. Dec 13 14:20:59.837372 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 14:20:59.837383 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 14:20:59.837393 systemd[1]: Listening on systemd-userdbd.socket. 
Dec 13 14:20:59.837403 systemd[1]: Mounting dev-hugepages.mount... Dec 13 14:20:59.837414 systemd[1]: Mounting dev-mqueue.mount... Dec 13 14:20:59.837424 systemd[1]: Mounting media.mount... Dec 13 14:20:59.837434 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 14:20:59.837445 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 14:20:59.837456 systemd[1]: Mounting tmp.mount... Dec 13 14:20:59.837468 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 14:20:59.837479 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 14:20:59.837489 systemd[1]: Starting kmod-static-nodes.service... Dec 13 14:20:59.837499 systemd[1]: Starting modprobe@configfs.service... Dec 13 14:20:59.837510 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 14:20:59.837520 systemd[1]: Starting modprobe@drm.service... Dec 13 14:20:59.837532 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 14:20:59.837542 systemd[1]: Starting modprobe@fuse.service... Dec 13 14:20:59.837553 systemd[1]: Starting modprobe@loop.service... Dec 13 14:20:59.837565 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 14:20:59.837576 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 14:20:59.837591 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 14:20:59.837601 systemd[1]: Starting systemd-journald.service... Dec 13 14:20:59.837611 kernel: loop: module loaded Dec 13 14:20:59.837621 systemd[1]: Starting systemd-modules-load.service... Dec 13 14:20:59.837632 systemd[1]: Starting systemd-network-generator.service... Dec 13 14:20:59.837642 kernel: fuse: init (API version 7.34) Dec 13 14:20:59.837652 systemd[1]: Starting systemd-remount-fs.service... Dec 13 14:20:59.837664 systemd[1]: Starting systemd-udev-trigger.service... 
Dec 13 14:20:59.837675 systemd[1]: Mounted dev-hugepages.mount. Dec 13 14:20:59.837685 systemd[1]: Mounted dev-mqueue.mount. Dec 13 14:20:59.837696 systemd[1]: Mounted media.mount. Dec 13 14:20:59.837706 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 14:20:59.837717 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 14:20:59.837728 systemd[1]: Mounted tmp.mount. Dec 13 14:20:59.837738 systemd[1]: Finished kmod-static-nodes.service. Dec 13 14:20:59.837748 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 14:20:59.837760 systemd[1]: Finished modprobe@configfs.service. Dec 13 14:20:59.837770 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 14:20:59.837780 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 14:20:59.837791 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 14:20:59.837801 systemd[1]: Finished modprobe@drm.service. Dec 13 14:20:59.837811 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 14:20:59.837821 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 14:20:59.837832 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 14:20:59.837851 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 14:20:59.837865 systemd[1]: Finished modprobe@fuse.service. Dec 13 14:20:59.837877 systemd-journald[1032]: Journal started Dec 13 14:20:59.837919 systemd-journald[1032]: Runtime Journal (/run/log/journal/3722d1d328154d3db95d37b845416763) is 6.0M, max 48.7M, 42.6M free. Dec 13 14:20:59.731000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 14:20:59.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:59.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:59.835000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 14:20:59.835000 audit[1032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=fffff115a4b0 a2=4000 a3=1 items=0 ppid=1 pid=1032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:20:59.835000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 14:20:59.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.840071 systemd[1]: Started systemd-journald.service. Dec 13 14:20:59.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.841107 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 14:20:59.841422 systemd[1]: Finished modprobe@loop.service. Dec 13 14:20:59.841000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:20:59.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.842570 systemd[1]: Finished systemd-modules-load.service. Dec 13 14:20:59.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.843626 systemd[1]: Finished systemd-network-generator.service. Dec 13 14:20:59.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.844735 systemd[1]: Finished systemd-remount-fs.service. Dec 13 14:20:59.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.845806 systemd[1]: Reached target network-pre.target. Dec 13 14:20:59.847531 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 14:20:59.849331 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 14:20:59.849906 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 14:20:59.851287 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 14:20:59.853018 systemd[1]: Starting systemd-journal-flush.service... Dec 13 14:20:59.853715 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 14:20:59.854873 systemd[1]: Starting systemd-random-seed.service... 
Dec 13 14:20:59.855547 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 14:20:59.856575 systemd[1]: Starting systemd-sysctl.service... Dec 13 14:20:59.859180 systemd[1]: Starting systemd-sysusers.service... Dec 13 14:20:59.861560 systemd-journald[1032]: Time spent on flushing to /var/log/journal/3722d1d328154d3db95d37b845416763 is 11.402ms for 931 entries. Dec 13 14:20:59.861560 systemd-journald[1032]: System Journal (/var/log/journal/3722d1d328154d3db95d37b845416763) is 8.0M, max 195.6M, 187.6M free. Dec 13 14:20:59.885744 systemd-journald[1032]: Received client request to flush runtime journal. Dec 13 14:20:59.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.861586 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 14:20:59.864622 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 14:20:59.865676 systemd[1]: Finished systemd-random-seed.service. Dec 13 14:20:59.866604 systemd[1]: Reached target first-boot-complete.target. Dec 13 14:20:59.878626 systemd[1]: Finished systemd-sysusers.service. 
Dec 13 14:20:59.880494 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 14:20:59.881627 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 14:20:59.883741 systemd[1]: Starting systemd-udev-settle.service... Dec 13 14:20:59.884775 systemd[1]: Finished systemd-sysctl.service. Dec 13 14:20:59.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:20:59.886814 systemd[1]: Finished systemd-journal-flush.service. Dec 13 14:20:59.890916 udevadm[1084]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 14:20:59.900980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 14:20:59.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:00.228981 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 14:21:00.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:00.230953 systemd[1]: Starting systemd-udevd.service... Dec 13 14:21:00.254832 systemd-udevd[1089]: Using default interface naming scheme 'v252'. Dec 13 14:21:00.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:00.266740 systemd[1]: Started systemd-udevd.service. Dec 13 14:21:00.269003 systemd[1]: Starting systemd-networkd.service... 
Dec 13 14:21:00.279987 systemd[1]: Starting systemd-userdbd.service...
Dec 13 14:21:00.287547 systemd[1]: Found device dev-ttyAMA0.device.
Dec 13 14:21:00.332302 systemd[1]: Started systemd-userdbd.service.
Dec 13 14:21:00.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.339832 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Dec 13 14:21:00.364353 systemd[1]: Finished systemd-udev-settle.service.
Dec 13 14:21:00.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.366190 systemd[1]: Starting lvm2-activation-early.service...
Dec 13 14:21:00.382786 lvm[1122]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:21:00.386641 systemd-networkd[1096]: lo: Link UP
Dec 13 14:21:00.386654 systemd-networkd[1096]: lo: Gained carrier
Dec 13 14:21:00.387019 systemd-networkd[1096]: Enumeration completed
Dec 13 14:21:00.387124 systemd-networkd[1096]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 14:21:00.387147 systemd[1]: Started systemd-networkd.service.
Dec 13 14:21:00.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.390327 systemd-networkd[1096]: eth0: Link UP
Dec 13 14:21:00.390338 systemd-networkd[1096]: eth0: Gained carrier
Dec 13 14:21:00.403948 systemd-networkd[1096]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 14:21:00.408679 systemd[1]: Finished lvm2-activation-early.service.
Dec 13 14:21:00.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.409476 systemd[1]: Reached target cryptsetup.target.
Dec 13 14:21:00.411245 systemd[1]: Starting lvm2-activation.service...
Dec 13 14:21:00.414708 lvm[1125]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 14:21:00.439765 systemd[1]: Finished lvm2-activation.service.
Dec 13 14:21:00.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.440526 systemd[1]: Reached target local-fs-pre.target.
Dec 13 14:21:00.441186 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 14:21:00.441217 systemd[1]: Reached target local-fs.target.
Dec 13 14:21:00.441781 systemd[1]: Reached target machines.target.
Dec 13 14:21:00.443537 systemd[1]: Starting ldconfig.service...
Dec 13 14:21:00.444422 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:21:00.444476 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:00.445557 systemd[1]: Starting systemd-boot-update.service...
Dec 13 14:21:00.447331 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Dec 13 14:21:00.449535 systemd[1]: Starting systemd-machine-id-commit.service...
Dec 13 14:21:00.451571 systemd[1]: Starting systemd-sysext.service...
Dec 13 14:21:00.453158 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1128 (bootctl)
Dec 13 14:21:00.455129 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Dec 13 14:21:00.459753 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Dec 13 14:21:00.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.464668 systemd[1]: Unmounting usr-share-oem.mount...
Dec 13 14:21:00.468630 systemd[1]: usr-share-oem.mount: Deactivated successfully.
Dec 13 14:21:00.469018 systemd[1]: Unmounted usr-share-oem.mount.
Dec 13 14:21:00.525882 kernel: loop0: detected capacity change from 0 to 194512
Dec 13 14:21:00.530250 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 14:21:00.530931 systemd[1]: Finished systemd-machine-id-commit.service.
Dec 13 14:21:00.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.540903 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 14:21:00.548198 systemd-fsck[1140]: fsck.fat 4.2 (2021-01-31)
Dec 13 14:21:00.548198 systemd-fsck[1140]: /dev/vda1: 236 files, 117175/258078 clusters
Dec 13 14:21:00.551736 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Dec 13 14:21:00.551000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.554893 systemd[1]: Mounting boot.mount...
Dec 13 14:21:00.560866 kernel: loop1: detected capacity change from 0 to 194512
Dec 13 14:21:00.562453 systemd[1]: Mounted boot.mount.
Dec 13 14:21:00.568179 (sd-sysext)[1147]: Using extensions 'kubernetes'.
Dec 13 14:21:00.568497 (sd-sysext)[1147]: Merged extensions into '/usr'.
Dec 13 14:21:00.569108 systemd[1]: Finished systemd-boot-update.service.
Dec 13 14:21:00.569000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.584381 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:21:00.585608 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:21:00.587436 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:21:00.589140 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:21:00.589785 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:21:00.589926 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:00.590703 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:21:00.590918 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:21:00.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.592176 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:21:00.592312 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:21:00.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.593591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:21:00.593740 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:21:00.595083 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:21:00.595186 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:21:00.627871 ldconfig[1127]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 14:21:00.631372 systemd[1]: Finished ldconfig.service.
Dec 13 14:21:00.631000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.808418 systemd[1]: Mounting usr-share-oem.mount...
Dec 13 14:21:00.813561 systemd[1]: Mounted usr-share-oem.mount.
Dec 13 14:21:00.815247 systemd[1]: Finished systemd-sysext.service.
Dec 13 14:21:00.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.817104 systemd[1]: Starting ensure-sysext.service...
Dec 13 14:21:00.818693 systemd[1]: Starting systemd-tmpfiles-setup.service...
Dec 13 14:21:00.823033 systemd[1]: Reloading.
Dec 13 14:21:00.827380 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Dec 13 14:21:00.828090 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 14:21:00.829421 systemd-tmpfiles[1164]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 14:21:00.860791 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2024-12-13T14:21:00Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]"
Dec 13 14:21:00.860822 /usr/lib/systemd/system-generators/torcx-generator[1184]: time="2024-12-13T14:21:00Z" level=info msg="torcx already run"
Dec 13 14:21:00.923094 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Dec 13 14:21:00.923113 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Dec 13 14:21:00.938411 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 14:21:00.983345 systemd[1]: Finished systemd-tmpfiles-setup.service.
Dec 13 14:21:00.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:00.987017 systemd[1]: Starting audit-rules.service...
Dec 13 14:21:00.988718 systemd[1]: Starting clean-ca-certificates.service...
Dec 13 14:21:00.990590 systemd[1]: Starting systemd-journal-catalog-update.service...
Dec 13 14:21:00.993126 systemd[1]: Starting systemd-resolved.service...
Dec 13 14:21:00.995337 systemd[1]: Starting systemd-timesyncd.service...
Dec 13 14:21:00.997331 systemd[1]: Starting systemd-update-utmp.service...
Dec 13 14:21:00.998758 systemd[1]: Finished clean-ca-certificates.service.
Dec 13 14:21:00.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.002349 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:21:01.005706 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.007121 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:21:01.007000 audit[1237]: SYSTEM_BOOT pid=1237 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.010999 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:21:01.012872 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:21:01.013478 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.013592 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:01.013682 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:21:01.014462 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:21:01.014606 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:21:01.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.015812 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:21:01.016021 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:21:01.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.017166 systemd[1]: Finished systemd-journal-catalog-update.service.
Dec 13 14:21:01.018255 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:21:01.018427 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:21:01.022650 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.023834 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:21:01.025607 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:21:01.027357 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:21:01.027953 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.028068 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:01.029412 systemd[1]: Starting systemd-update-done.service...
Dec 13 14:21:01.030191 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:21:01.031370 systemd[1]: Finished systemd-update-utmp.service.
Dec 13 14:21:01.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.032395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:21:01.032529 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:21:01.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.033554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:21:01.033682 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:21:01.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.034692 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:21:01.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.034880 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:21:01.038521 systemd[1]: Finished systemd-update-done.service.
Dec 13 14:21:01.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.039812 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.040956 systemd[1]: Starting modprobe@dm_mod.service...
Dec 13 14:21:01.044399 systemd[1]: Starting modprobe@drm.service...
Dec 13 14:21:01.046146 systemd[1]: Starting modprobe@efi_pstore.service...
Dec 13 14:21:01.047959 systemd[1]: Starting modprobe@loop.service...
Dec 13 14:21:01.048620 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.048765 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:01.050212 systemd[1]: Starting systemd-networkd-wait-online.service...
Dec 13 14:21:01.050970 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 14:21:01.054262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 14:21:01.054411 systemd[1]: Finished modprobe@dm_mod.service.
Dec 13 14:21:01.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.055427 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 14:21:01.055566 systemd[1]: Finished modprobe@drm.service.
Dec 13 14:21:01.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.056526 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 14:21:01.056664 systemd[1]: Finished modprobe@efi_pstore.service.
Dec 13 14:21:01.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.057000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.057661 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 14:21:01.057806 systemd[1]: Finished modprobe@loop.service.
Dec 13 14:21:01.059143 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 14:21:01.059282 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.063465 systemd[1]: Started systemd-timesyncd.service.
Dec 13 14:21:01.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:21:01.064799 systemd[1]: Finished ensure-sysext.service.
Dec 13 14:21:01.066247 systemd[1]: Reached target time-set.target.
Dec 13 14:21:01.069328 systemd-timesyncd[1236]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 14:21:01.069390 systemd-timesyncd[1236]: Initial clock synchronization to Fri 2024-12-13 14:21:00.768611 UTC.
Dec 13 14:21:01.079000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Dec 13 14:21:01.079000 audit[1282]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffffbe23530 a2=420 a3=0 items=0 ppid=1230 pid=1282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:21:01.079000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Dec 13 14:21:01.080924 augenrules[1282]: No rules
Dec 13 14:21:01.081444 systemd[1]: Finished audit-rules.service.
Dec 13 14:21:01.088867 systemd-resolved[1235]: Positive Trust Anchors:
Dec 13 14:21:01.088876 systemd-resolved[1235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 14:21:01.088904 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Dec 13 14:21:01.096913 systemd-resolved[1235]: Defaulting to hostname 'linux'.
Dec 13 14:21:01.098262 systemd[1]: Started systemd-resolved.service.
Dec 13 14:21:01.098923 systemd[1]: Reached target network.target.
Dec 13 14:21:01.099515 systemd[1]: Reached target nss-lookup.target.
Dec 13 14:21:01.100180 systemd[1]: Reached target sysinit.target.
Dec 13 14:21:01.100837 systemd[1]: Started motdgen.path.
Dec 13 14:21:01.101399 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Dec 13 14:21:01.102384 systemd[1]: Started logrotate.timer.
Dec 13 14:21:01.103036 systemd[1]: Started mdadm.timer.
Dec 13 14:21:01.103555 systemd[1]: Started systemd-tmpfiles-clean.timer.
Dec 13 14:21:01.104192 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 14:21:01.104217 systemd[1]: Reached target paths.target.
Dec 13 14:21:01.104741 systemd[1]: Reached target timers.target.
Dec 13 14:21:01.105596 systemd[1]: Listening on dbus.socket.
Dec 13 14:21:01.107406 systemd[1]: Starting docker.socket...
Dec 13 14:21:01.108991 systemd[1]: Listening on sshd.socket.
Dec 13 14:21:01.109679 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:01.110005 systemd[1]: Listening on docker.socket.
Dec 13 14:21:01.110595 systemd[1]: Reached target sockets.target.
Dec 13 14:21:01.111187 systemd[1]: Reached target basic.target.
Dec 13 14:21:01.111886 systemd[1]: System is tainted: cgroupsv1
Dec 13 14:21:01.111931 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.111950 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Dec 13 14:21:01.112978 systemd[1]: Starting containerd.service...
Dec 13 14:21:01.114647 systemd[1]: Starting dbus.service...
Dec 13 14:21:01.116335 systemd[1]: Starting enable-oem-cloudinit.service...
Dec 13 14:21:01.118282 systemd[1]: Starting extend-filesystems.service...
Dec 13 14:21:01.118987 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Dec 13 14:21:01.120396 systemd[1]: Starting motdgen.service...
Dec 13 14:21:01.122324 systemd[1]: Starting prepare-helm.service...
Dec 13 14:21:01.124114 systemd[1]: Starting ssh-key-proc-cmdline.service...
Dec 13 14:21:01.126364 systemd[1]: Starting sshd-keygen.service...
Dec 13 14:21:01.129786 systemd[1]: Starting systemd-logind.service...
Dec 13 14:21:01.130399 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Dec 13 14:21:01.130479 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 14:21:01.131507 jq[1293]: false
Dec 13 14:21:01.131652 systemd[1]: Starting update-engine.service...
Dec 13 14:21:01.133691 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Dec 13 14:21:01.137748 jq[1309]: true
Dec 13 14:21:01.136057 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 14:21:01.136305 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Dec 13 14:21:01.137586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 14:21:01.137873 systemd[1]: Finished ssh-key-proc-cmdline.service.
Dec 13 14:21:01.152355 jq[1319]: true
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found loop1
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda1
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda2
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda3
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found usr
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda4
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda6
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda7
Dec 13 14:21:01.156159 extend-filesystems[1294]: Found vda9
Dec 13 14:21:01.156159 extend-filesystems[1294]: Checking size of /dev/vda9
Dec 13 14:21:01.174251 tar[1314]: linux-arm64/helm
Dec 13 14:21:01.161800 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 14:21:01.162071 systemd[1]: Finished motdgen.service.
Dec 13 14:21:01.182087 dbus-daemon[1292]: [system] SELinux support is enabled
Dec 13 14:21:01.190260 extend-filesystems[1294]: Resized partition /dev/vda9
Dec 13 14:21:01.182371 systemd[1]: Started dbus.service.
Dec 13 14:21:01.184773 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 14:21:01.184790 systemd[1]: Reached target system-config.target.
Dec 13 14:21:01.185786 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 14:21:01.185802 systemd[1]: Reached target user-config.target.
Dec 13 14:21:01.198918 extend-filesystems[1347]: resize2fs 1.46.5 (30-Dec-2021)
Dec 13 14:21:01.211874 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 14:21:01.232904 update_engine[1306]: I1213 14:21:01.232685 1306 main.cc:92] Flatcar Update Engine starting
Dec 13 14:21:01.232886 systemd-logind[1303]: Watching system buttons on /dev/input/event0 (Power Button)
Dec 13 14:21:01.233488 systemd-logind[1303]: New seat seat0.
Dec 13 14:21:01.235704 systemd[1]: Started systemd-logind.service.
Dec 13 14:21:01.237234 systemd[1]: Started update-engine.service.
Dec 13 14:21:01.241797 update_engine[1306]: I1213 14:21:01.237282 1306 update_check_scheduler.cc:74] Next update check in 5m23s
Dec 13 14:21:01.239989 systemd[1]: Started locksmithd.service.
Dec 13 14:21:01.246291 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 14:21:01.259804 extend-filesystems[1347]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 14:21:01.259804 extend-filesystems[1347]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 14:21:01.259804 extend-filesystems[1347]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 14:21:01.262520 bash[1352]: Updated "/home/core/.ssh/authorized_keys" Dec 13 14:21:01.264753 env[1320]: time="2024-12-13T14:21:01.264708040Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 14:21:01.264988 extend-filesystems[1294]: Resized filesystem in /dev/vda9 Dec 13 14:21:01.266235 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 14:21:01.266482 systemd[1]: Finished extend-filesystems.service. Dec 13 14:21:01.267681 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 14:21:01.290223 env[1320]: time="2024-12-13T14:21:01.289958880Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 14:21:01.290337 env[1320]: time="2024-12-13T14:21:01.290315840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.292981280Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293011600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293259720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293276640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293289720Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293299040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293381160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293658160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293808600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 14:21:01.293859 env[1320]: time="2024-12-13T14:21:01.293824400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 14:21:01.294118 env[1320]: time="2024-12-13T14:21:01.293898160Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 14:21:01.294118 env[1320]: time="2024-12-13T14:21:01.293911480Z" level=info msg="metadata content store policy set" policy=shared Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.296990000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297022800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297037120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297067920Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297084280Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297099000Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297111440Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297450280Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297470800Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297483920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297496240Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297508760Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297620160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Dec 13 14:21:01.297908 env[1320]: time="2024-12-13T14:21:01.297689520Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.297992320Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298019000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298032240Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298137560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298158320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298173080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298184440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298196040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298212 env[1320]: time="2024-12-13T14:21:01.298210800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298375 env[1320]: time="2024-12-13T14:21:01.298222240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Dec 13 14:21:01.298375 env[1320]: time="2024-12-13T14:21:01.298234000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298375 env[1320]: time="2024-12-13T14:21:01.298246560Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 14:21:01.298375 env[1320]: time="2024-12-13T14:21:01.298365800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298381920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298396320Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298408360Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298422120Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298432640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 14:21:01.298458 env[1320]: time="2024-12-13T14:21:01.298449200Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 14:21:01.298574 env[1320]: time="2024-12-13T14:21:01.298481680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 14:21:01.298717 env[1320]: time="2024-12-13T14:21:01.298660520Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 14:21:01.299314 env[1320]: time="2024-12-13T14:21:01.298721560Z" level=info msg="Connect containerd service" Dec 13 14:21:01.299314 env[1320]: time="2024-12-13T14:21:01.298752600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 14:21:01.299439 env[1320]: time="2024-12-13T14:21:01.299409680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:21:01.299779 env[1320]: time="2024-12-13T14:21:01.299754000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 14:21:01.299828 env[1320]: time="2024-12-13T14:21:01.299792480Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 14:21:01.299876 env[1320]: time="2024-12-13T14:21:01.299810560Z" level=info msg="Start subscribing containerd event" Dec 13 14:21:01.299876 env[1320]: time="2024-12-13T14:21:01.299870760Z" level=info msg="Start recovering state" Dec 13 14:21:01.299934 systemd[1]: Started containerd.service. 
Dec 13 14:21:01.300057 env[1320]: time="2024-12-13T14:21:01.299941400Z" level=info msg="Start event monitor" Dec 13 14:21:01.300057 env[1320]: time="2024-12-13T14:21:01.299961560Z" level=info msg="Start snapshots syncer" Dec 13 14:21:01.300057 env[1320]: time="2024-12-13T14:21:01.299971920Z" level=info msg="Start cni network conf syncer for default" Dec 13 14:21:01.300057 env[1320]: time="2024-12-13T14:21:01.299979040Z" level=info msg="Start streaming server" Dec 13 14:21:01.301213 env[1320]: time="2024-12-13T14:21:01.301105200Z" level=info msg="containerd successfully booted in 0.044038s" Dec 13 14:21:01.309994 locksmithd[1354]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 14:21:01.565278 tar[1314]: linux-arm64/LICENSE Dec 13 14:21:01.565371 tar[1314]: linux-arm64/README.md Dec 13 14:21:01.569657 systemd[1]: Finished prepare-helm.service. Dec 13 14:21:02.340028 systemd-networkd[1096]: eth0: Gained IPv6LL Dec 13 14:21:02.341943 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 14:21:02.342895 systemd[1]: Reached target network-online.target. Dec 13 14:21:02.345156 systemd[1]: Starting kubelet.service... Dec 13 14:21:02.806622 systemd[1]: Started kubelet.service. Dec 13 14:21:02.832830 sshd_keygen[1322]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 14:21:02.849852 systemd[1]: Finished sshd-keygen.service. Dec 13 14:21:02.851789 systemd[1]: Starting issuegen.service... Dec 13 14:21:02.856206 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 14:21:02.856396 systemd[1]: Finished issuegen.service. Dec 13 14:21:02.858271 systemd[1]: Starting systemd-user-sessions.service... Dec 13 14:21:02.864323 systemd[1]: Finished systemd-user-sessions.service. Dec 13 14:21:02.866337 systemd[1]: Started getty@tty1.service. Dec 13 14:21:02.868129 systemd[1]: Started serial-getty@ttyAMA0.service. Dec 13 14:21:02.869121 systemd[1]: Reached target getty.target. 
Dec 13 14:21:02.869784 systemd[1]: Reached target multi-user.target. Dec 13 14:21:02.871565 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 14:21:02.877927 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 14:21:02.878116 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 14:21:02.879328 systemd[1]: Startup finished in 4.878s (kernel) + 5.035s (userspace) = 9.914s. Dec 13 14:21:03.273675 kubelet[1379]: E1213 14:21:03.273564 1379 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:21:03.276197 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:21:03.276333 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:21:06.046170 systemd[1]: Created slice system-sshd.slice. Dec 13 14:21:06.047311 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:40654.service. Dec 13 14:21:06.105488 sshd[1406]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.107926 sshd[1406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.119736 systemd[1]: Created slice user-500.slice. Dec 13 14:21:06.120800 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 14:21:06.123569 systemd-logind[1303]: New session 1 of user core. Dec 13 14:21:06.130016 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 14:21:06.131316 systemd[1]: Starting user@500.service... Dec 13 14:21:06.134413 (systemd)[1411]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.191880 systemd[1411]: Queued start job for default target default.target. 
Dec 13 14:21:06.192066 systemd[1411]: Reached target paths.target. Dec 13 14:21:06.192082 systemd[1411]: Reached target sockets.target. Dec 13 14:21:06.192092 systemd[1411]: Reached target timers.target. Dec 13 14:21:06.192114 systemd[1411]: Reached target basic.target. Dec 13 14:21:06.192152 systemd[1411]: Reached target default.target. Dec 13 14:21:06.192174 systemd[1411]: Startup finished in 52ms. Dec 13 14:21:06.192236 systemd[1]: Started user@500.service. Dec 13 14:21:06.193094 systemd[1]: Started session-1.scope. Dec 13 14:21:06.241895 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:40660.service. Dec 13 14:21:06.292456 sshd[1420]: Accepted publickey for core from 10.0.0.1 port 40660 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.293769 sshd[1420]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.297302 systemd-logind[1303]: New session 2 of user core. Dec 13 14:21:06.298240 systemd[1]: Started session-2.scope. Dec 13 14:21:06.351816 sshd[1420]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.353799 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:40666.service. Dec 13 14:21:06.354854 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:40660.service: Deactivated successfully. Dec 13 14:21:06.355925 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 14:21:06.356314 systemd-logind[1303]: Session 2 logged out. Waiting for processes to exit. Dec 13 14:21:06.357109 systemd-logind[1303]: Removed session 2. Dec 13 14:21:06.396943 sshd[1425]: Accepted publickey for core from 10.0.0.1 port 40666 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.398088 sshd[1425]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.402003 systemd-logind[1303]: New session 3 of user core. Dec 13 14:21:06.402569 systemd[1]: Started session-3.scope. 
Dec 13 14:21:06.449908 sshd[1425]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.451906 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:40668.service. Dec 13 14:21:06.452617 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:40666.service: Deactivated successfully. Dec 13 14:21:06.453349 systemd-logind[1303]: Session 3 logged out. Waiting for processes to exit. Dec 13 14:21:06.453422 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 14:21:06.454467 systemd-logind[1303]: Removed session 3. Dec 13 14:21:06.494972 sshd[1432]: Accepted publickey for core from 10.0.0.1 port 40668 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.496066 sshd[1432]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.499450 systemd-logind[1303]: New session 4 of user core. Dec 13 14:21:06.499753 systemd[1]: Started session-4.scope. Dec 13 14:21:06.551912 sshd[1432]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.553971 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:40672.service. Dec 13 14:21:06.554814 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:40668.service: Deactivated successfully. Dec 13 14:21:06.555601 systemd-logind[1303]: Session 4 logged out. Waiting for processes to exit. Dec 13 14:21:06.555676 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 14:21:06.556574 systemd-logind[1303]: Removed session 4. Dec 13 14:21:06.597744 sshd[1439]: Accepted publickey for core from 10.0.0.1 port 40672 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.599110 sshd[1439]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.602518 systemd-logind[1303]: New session 5 of user core. Dec 13 14:21:06.603302 systemd[1]: Started session-5.scope. 
Dec 13 14:21:06.658405 sudo[1445]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 14:21:06.658620 sudo[1445]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:21:06.668985 dbus-daemon[1292]: avc: received setenforce notice (enforcing=1) Dec 13 14:21:06.669812 sudo[1445]: pam_unix(sudo:session): session closed for user root Dec 13 14:21:06.671628 sshd[1439]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.673802 systemd[1]: Started sshd@5-10.0.0.138:22-10.0.0.1:40680.service. Dec 13 14:21:06.674756 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:40672.service: Deactivated successfully. Dec 13 14:21:06.675730 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 14:21:06.676158 systemd-logind[1303]: Session 5 logged out. Waiting for processes to exit. Dec 13 14:21:06.676911 systemd-logind[1303]: Removed session 5. Dec 13 14:21:06.717467 sshd[1447]: Accepted publickey for core from 10.0.0.1 port 40680 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.718898 sshd[1447]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.723038 systemd-logind[1303]: New session 6 of user core. Dec 13 14:21:06.723594 systemd[1]: Started session-6.scope. Dec 13 14:21:06.774502 sudo[1454]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 14:21:06.774735 sudo[1454]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:21:06.777271 sudo[1454]: pam_unix(sudo:session): session closed for user root Dec 13 14:21:06.781240 sudo[1453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 14:21:06.781441 sudo[1453]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:21:06.789188 systemd[1]: Stopping audit-rules.service... 
Dec 13 14:21:06.789000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:21:06.790953 kernel: kauditd_printk_skb: 78 callbacks suppressed Dec 13 14:21:06.790983 kernel: audit: type=1305 audit(1734099666.789:158): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Dec 13 14:21:06.791076 auditctl[1457]: No rules Dec 13 14:21:06.791384 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 14:21:06.791582 systemd[1]: Stopped audit-rules.service. Dec 13 14:21:06.789000 audit[1457]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0aae8e0 a2=420 a3=0 items=0 ppid=1 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:06.793011 systemd[1]: Starting audit-rules.service... Dec 13 14:21:06.796012 kernel: audit: type=1300 audit(1734099666.789:158): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc0aae8e0 a2=420 a3=0 items=0 ppid=1 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:06.796059 kernel: audit: type=1327 audit(1734099666.789:158): proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:21:06.789000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Dec 13 14:21:06.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:06.799487 kernel: audit: type=1131 audit(1734099666.790:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.808236 augenrules[1475]: No rules Dec 13 14:21:06.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.810012 sudo[1453]: pam_unix(sudo:session): session closed for user root Dec 13 14:21:06.808923 systemd[1]: Finished audit-rules.service. Dec 13 14:21:06.811339 sshd[1447]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:06.809000 audit[1453]: USER_END pid=1453 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.814168 systemd[1]: Started sshd@6-10.0.0.138:22-10.0.0.1:40682.service. Dec 13 14:21:06.814866 kernel: audit: type=1130 audit(1734099666.808:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.814902 kernel: audit: type=1106 audit(1734099666.809:161): pid=1453 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.814918 kernel: audit: type=1104 audit(1734099666.809:162): pid=1453 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:06.809000 audit[1453]: CRED_DISP pid=1453 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.816508 systemd[1]: sshd@5-10.0.0.138:22-10.0.0.1:40680.service: Deactivated successfully. Dec 13 14:21:06.817417 kernel: audit: type=1130 audit(1734099666.813:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:40682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:40682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.817408 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 14:21:06.817675 systemd-logind[1303]: Session 6 logged out. Waiting for processes to exit. Dec 13 14:21:06.818360 systemd-logind[1303]: Removed session 6. 
Dec 13 14:21:06.814000 audit[1447]: USER_END pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.823330 kernel: audit: type=1106 audit(1734099666.814:164): pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.823379 kernel: audit: type=1104 audit(1734099666.814:165): pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.814000 audit[1447]: CRED_DISP pid=1447 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.815000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.138:22-10.0.0.1:40680 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:06.858000 audit[1480]: USER_ACCT pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.859483 sshd[1480]: Accepted publickey for core from 10.0.0.1 port 40682 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:21:06.859000 audit[1480]: CRED_ACQ pid=1480 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.859000 audit[1480]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc99d86f0 a2=3 a3=1 items=0 ppid=1 pid=1480 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:06.859000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:21:06.860817 sshd[1480]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:21:06.864526 systemd[1]: Started session-7.scope. Dec 13 14:21:06.864697 systemd-logind[1303]: New session 7 of user core. 
Dec 13 14:21:06.867000 audit[1480]: USER_START pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.868000 audit[1485]: CRED_ACQ pid=1485 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:06.914000 audit[1486]: USER_ACCT pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.914000 audit[1486]: CRED_REFR pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.915142 sudo[1486]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 14:21:06.915357 sudo[1486]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 14:21:06.916000 audit[1486]: USER_START pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:06.973750 systemd[1]: Starting docker.service... 
Dec 13 14:21:07.057488 env[1498]: time="2024-12-13T14:21:07.057436487Z" level=info msg="Starting up" Dec 13 14:21:07.059098 env[1498]: time="2024-12-13T14:21:07.059034011Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:21:07.059188 env[1498]: time="2024-12-13T14:21:07.059173470Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:21:07.059253 env[1498]: time="2024-12-13T14:21:07.059237923Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:21:07.059312 env[1498]: time="2024-12-13T14:21:07.059300023Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:21:07.063107 env[1498]: time="2024-12-13T14:21:07.063081389Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 14:21:07.063107 env[1498]: time="2024-12-13T14:21:07.063105083Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 14:21:07.063199 env[1498]: time="2024-12-13T14:21:07.063120343Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 14:21:07.063199 env[1498]: time="2024-12-13T14:21:07.063129797Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 14:21:07.067678 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport632614134-merged.mount: Deactivated successfully. Dec 13 14:21:07.231116 env[1498]: time="2024-12-13T14:21:07.231059859Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 14:21:07.231116 env[1498]: time="2024-12-13T14:21:07.231090811Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 14:21:07.231357 env[1498]: time="2024-12-13T14:21:07.231221326Z" level=info msg="Loading containers: start." 
Dec 13 14:21:07.274000 audit[1532]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1532 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.274000 audit[1532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdea994b0 a2=0 a3=1 items=0 ppid=1498 pid=1532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.274000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Dec 13 14:21:07.276000 audit[1534]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1534 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.276000 audit[1534]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcdc39140 a2=0 a3=1 items=0 ppid=1498 pid=1534 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.276000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Dec 13 14:21:07.278000 audit[1536]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1536 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.278000 audit[1536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff599da80 a2=0 a3=1 items=0 ppid=1498 pid=1536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.278000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:21:07.280000 
audit[1538]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.280000 audit[1538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc7d693b0 a2=0 a3=1 items=0 ppid=1498 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.280000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:21:07.282000 audit[1540]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.282000 audit[1540]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffe5c09440 a2=0 a3=1 items=0 ppid=1498 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.282000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6A0052455455524E Dec 13 14:21:07.307000 audit[1545]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_rule pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.307000 audit[1545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffd852b970 a2=0 a3=1 items=0 ppid=1498 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.307000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D49534F4C4154494F4E2D53544147452D32002D6A0052455455524E Dec 13 14:21:07.312000 audit[1547]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1547 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.312000 audit[1547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffeb0460b0 a2=0 a3=1 items=0 ppid=1498 pid=1547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.312000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Dec 13 14:21:07.314000 audit[1549]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1549 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.314000 audit[1549]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd51f4020 a2=0 a3=1 items=0 ppid=1498 pid=1549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.314000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Dec 13 14:21:07.316000 audit[1551]: NETFILTER_CFG table=filter:10 family=2 entries=2 op=nft_register_chain pid=1551 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.316000 audit[1551]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=308 a0=3 a1=ffffde6d5ea0 a2=0 a3=1 items=0 ppid=1498 pid=1551 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.316000 audit: 
PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:21:07.322000 audit[1555]: NETFILTER_CFG table=filter:11 family=2 entries=1 op=nft_unregister_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.322000 audit[1555]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffedee1700 a2=0 a3=1 items=0 ppid=1498 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.322000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:21:07.331000 audit[1556]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.331000 audit[1556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff70d17a0 a2=0 a3=1 items=0 ppid=1498 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.331000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:21:07.340858 kernel: Initializing XFRM netlink socket Dec 13 14:21:07.364533 env[1498]: time="2024-12-13T14:21:07.364500691Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Dec 13 14:21:07.377000 audit[1564]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.377000 audit[1564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=492 a0=3 a1=fffff673b980 a2=0 a3=1 items=0 ppid=1498 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.377000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Dec 13 14:21:07.392000 audit[1567]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1567 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.392000 audit[1567]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcf579c50 a2=0 a3=1 items=0 ppid=1498 pid=1567 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.392000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Dec 13 14:21:07.394000 audit[1570]: NETFILTER_CFG table=filter:15 family=2 entries=1 op=nft_register_rule pid=1570 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.394000 audit[1570]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffea1bb7e0 a2=0 a3=1 items=0 ppid=1498 pid=1570 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 
13 14:21:07.394000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B657230002D6F00646F636B657230002D6A00414343455054 Dec 13 14:21:07.396000 audit[1572]: NETFILTER_CFG table=filter:16 family=2 entries=1 op=nft_register_rule pid=1572 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.396000 audit[1572]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd3ea63b0 a2=0 a3=1 items=0 ppid=1498 pid=1572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.396000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6900646F636B6572300000002D6F00646F636B657230002D6A00414343455054 Dec 13 14:21:07.398000 audit[1574]: NETFILTER_CFG table=nat:17 family=2 entries=2 op=nft_register_chain pid=1574 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.398000 audit[1574]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=356 a0=3 a1=ffffe44246b0 a2=0 a3=1 items=0 ppid=1498 pid=1574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.398000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Dec 13 14:21:07.399000 audit[1576]: NETFILTER_CFG table=nat:18 family=2 entries=2 op=nft_register_chain pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.399000 audit[1576]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=444 a0=3 a1=ffffe3fe4480 a2=0 a3=1 items=0 ppid=1498 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.399000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Dec 13 14:21:07.400000 audit[1578]: NETFILTER_CFG table=filter:19 family=2 entries=1 op=nft_register_rule pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.400000 audit[1578]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=304 a0=3 a1=ffffc954db50 a2=0 a3=1 items=0 ppid=1498 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.400000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6A00444F434B4552 Dec 13 14:21:07.409000 audit[1581]: NETFILTER_CFG table=filter:20 family=2 entries=1 op=nft_register_rule pid=1581 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.409000 audit[1581]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=508 a0=3 a1=ffffe918fc90 a2=0 a3=1 items=0 ppid=1498 pid=1581 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.409000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Dec 13 14:21:07.410000 audit[1583]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.410000 
audit[1583]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=240 a0=3 a1=ffffe4731c20 a2=0 a3=1 items=0 ppid=1498 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.410000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Dec 13 14:21:07.412000 audit[1585]: NETFILTER_CFG table=filter:22 family=2 entries=1 op=nft_register_rule pid=1585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.412000 audit[1585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc073fd30 a2=0 a3=1 items=0 ppid=1498 pid=1585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.412000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Dec 13 14:21:07.414000 audit[1587]: NETFILTER_CFG table=filter:23 family=2 entries=1 op=nft_register_rule pid=1587 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.414000 audit[1587]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffe9efda20 a2=0 a3=1 items=0 ppid=1498 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.414000 audit: PROCTITLE 
proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Dec 13 14:21:07.415329 systemd-networkd[1096]: docker0: Link UP Dec 13 14:21:07.420000 audit[1591]: NETFILTER_CFG table=filter:24 family=2 entries=1 op=nft_unregister_rule pid=1591 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.420000 audit[1591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffcc4c6f60 a2=0 a3=1 items=0 ppid=1498 pid=1591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.420000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4400464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:21:07.433000 audit[1592]: NETFILTER_CFG table=filter:25 family=2 entries=1 op=nft_register_rule pid=1592 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:07.433000 audit[1592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc7dcbea0 a2=0 a3=1 items=0 ppid=1498 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:07.433000 audit: PROCTITLE proctitle=2F7573722F7362696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Dec 13 14:21:07.434919 env[1498]: time="2024-12-13T14:21:07.434896496Z" level=info msg="Loading containers: done." Dec 13 14:21:07.450775 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck4076679430-merged.mount: Deactivated successfully. 
Dec 13 14:21:07.455499 env[1498]: time="2024-12-13T14:21:07.455459142Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 14:21:07.455627 env[1498]: time="2024-12-13T14:21:07.455606172Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 14:21:07.455712 env[1498]: time="2024-12-13T14:21:07.455696478Z" level=info msg="Daemon has completed initialization" Dec 13 14:21:07.467638 systemd[1]: Started docker.service. Dec 13 14:21:07.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:07.473687 env[1498]: time="2024-12-13T14:21:07.473572955Z" level=info msg="API listen on /run/docker.sock" Dec 13 14:21:08.154981 env[1320]: time="2024-12-13T14:21:08.154933417Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 14:21:08.816209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3451248971.mount: Deactivated successfully. 
Dec 13 14:21:10.178852 env[1320]: time="2024-12-13T14:21:10.178788455Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:10.181383 env[1320]: time="2024-12-13T14:21:10.181346987Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:10.182972 env[1320]: time="2024-12-13T14:21:10.182945951Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:10.185085 env[1320]: time="2024-12-13T14:21:10.185061478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:10.185797 env[1320]: time="2024-12-13T14:21:10.185759862Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:50c86b7f73fdd28bacd4abf45260c9d3abc3b57eb038fa61fc45b5d0f2763e6f\"" Dec 13 14:21:10.195280 env[1320]: time="2024-12-13T14:21:10.195254661Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 14:21:12.052569 env[1320]: time="2024-12-13T14:21:12.052506442Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:12.053782 env[1320]: time="2024-12-13T14:21:12.053741154Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Dec 13 14:21:12.055494 env[1320]: time="2024-12-13T14:21:12.055453219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:12.057724 env[1320]: time="2024-12-13T14:21:12.057691234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:12.058521 env[1320]: time="2024-12-13T14:21:12.058493108Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:2d47abaa6ccc533f84ef74fff6d509de10bb040317351b45afe95a8021a1ddf7\"" Dec 13 14:21:12.067335 env[1320]: time="2024-12-13T14:21:12.067309322Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 14:21:13.270044 env[1320]: time="2024-12-13T14:21:13.269994809Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:13.272070 env[1320]: time="2024-12-13T14:21:13.271626214Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:13.273712 env[1320]: time="2024-12-13T14:21:13.273687428Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:13.275117 env[1320]: time="2024-12-13T14:21:13.275082972Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:13.275956 env[1320]: time="2024-12-13T14:21:13.275932360Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:ae633c52a23907b58f7a7867d2cccf3d3f5ebd8977beb6788e20fbecd3f446db\"" Dec 13 14:21:13.285199 env[1320]: time="2024-12-13T14:21:13.285157188Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 14:21:13.527110 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 14:21:13.528109 kernel: kauditd_printk_skb: 84 callbacks suppressed Dec 13 14:21:13.528151 kernel: audit: type=1130 audit(1734099673.525:200): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.527290 systemd[1]: Stopped kubelet.service. Dec 13 14:21:13.528850 systemd[1]: Starting kubelet.service... Dec 13 14:21:13.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.535027 kernel: audit: type=1131 audit(1734099673.525:201): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.613382 systemd[1]: Started kubelet.service. 
Dec 13 14:21:13.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.616882 kernel: audit: type=1130 audit(1734099673.612:202): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:13.725523 kubelet[1663]: E1213 14:21:13.725471 1663 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:21:13.728340 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:21:13.728480 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:21:13.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:21:13.731866 kernel: audit: type=1131 audit(1734099673.727:203): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:21:14.398666 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2889614531.mount: Deactivated successfully. 
Dec 13 14:21:14.927148 env[1320]: time="2024-12-13T14:21:14.927103922Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:14.928317 env[1320]: time="2024-12-13T14:21:14.928287813Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:14.929451 env[1320]: time="2024-12-13T14:21:14.929418192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:14.930875 env[1320]: time="2024-12-13T14:21:14.930840386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:14.931244 env[1320]: time="2024-12-13T14:21:14.931220646Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:768ee8cfd9311233d038d18430c18136e1ae4dd2e6de40fcf1c670bba2da6d06\"" Dec 13 14:21:14.940536 env[1320]: time="2024-12-13T14:21:14.940507499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 14:21:15.487503 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243370286.mount: Deactivated successfully. 
Dec 13 14:21:16.303215 env[1320]: time="2024-12-13T14:21:16.303165126Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.305067 env[1320]: time="2024-12-13T14:21:16.305034947Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.307188 env[1320]: time="2024-12-13T14:21:16.307157098Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.308767 env[1320]: time="2024-12-13T14:21:16.308729056Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.309657 env[1320]: time="2024-12-13T14:21:16.309625073Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" Dec 13 14:21:16.317534 env[1320]: time="2024-12-13T14:21:16.317509160Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 14:21:16.817497 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3005230570.mount: Deactivated successfully. 
Dec 13 14:21:16.821833 env[1320]: time="2024-12-13T14:21:16.821792622Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.823113 env[1320]: time="2024-12-13T14:21:16.823089184Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.824421 env[1320]: time="2024-12-13T14:21:16.824384315Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.825935 env[1320]: time="2024-12-13T14:21:16.825900836Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:16.826542 env[1320]: time="2024-12-13T14:21:16.826512988Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:829e9de338bd5fdd3f16f68f83a9fb288fbc8453e881e5d5cfd0f6f2ff72b43e\"" Dec 13 14:21:16.834914 env[1320]: time="2024-12-13T14:21:16.834886701Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 14:21:17.351538 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2319178840.mount: Deactivated successfully. 
Dec 13 14:21:19.428997 env[1320]: time="2024-12-13T14:21:19.428948650Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:19.430706 env[1320]: time="2024-12-13T14:21:19.430669782Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:19.432509 env[1320]: time="2024-12-13T14:21:19.432484629Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:19.434227 env[1320]: time="2024-12-13T14:21:19.434198669Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:19.435940 env[1320]: time="2024-12-13T14:21:19.435910837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:79f8d13ae8b8839cadfb2f83416935f5184206d386028e2d1263577f0ab3620b\"" Dec 13 14:21:23.979376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 14:21:23.979556 systemd[1]: Stopped kubelet.service. Dec 13 14:21:23.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:23.981027 systemd[1]: Starting kubelet.service... Dec 13 14:21:23.977000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:23.984662 kernel: audit: type=1130 audit(1734099683.977:204): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:23.984751 kernel: audit: type=1131 audit(1734099683.977:205): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:24.061616 systemd[1]: Started kubelet.service. Dec 13 14:21:24.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:24.064875 kernel: audit: type=1130 audit(1734099684.060:206): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:24.101882 kubelet[1774]: E1213 14:21:24.101810 1774 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 14:21:24.104366 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 14:21:24.104498 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 14:21:24.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Dec 13 14:21:24.107869 kernel: audit: type=1131 audit(1734099684.103:207): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Dec 13 14:21:25.723393 systemd[1]: Stopped kubelet.service. Dec 13 14:21:25.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.722000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.726035 systemd[1]: Starting kubelet.service... Dec 13 14:21:25.728691 kernel: audit: type=1130 audit(1734099685.722:208): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.728764 kernel: audit: type=1131 audit(1734099685.722:209): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.744325 systemd[1]: Reloading. 
Dec 13 14:21:25.793855 /usr/lib/systemd/system-generators/torcx-generator[1812]: time="2024-12-13T14:21:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:21:25.793886 /usr/lib/systemd/system-generators/torcx-generator[1812]: time="2024-12-13T14:21:25Z" level=info msg="torcx already run" Dec 13 14:21:25.900874 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:21:25.900894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:21:25.916235 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:21:25.975577 systemd[1]: Started kubelet.service. Dec 13 14:21:25.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.978865 kernel: audit: type=1130 audit(1734099685.975:210): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.979882 systemd[1]: Stopping kubelet.service... Dec 13 14:21:25.980991 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:21:25.981360 systemd[1]: Stopped kubelet.service. 
Dec 13 14:21:25.980000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:25.983118 systemd[1]: Starting kubelet.service... Dec 13 14:21:25.984906 kernel: audit: type=1131 audit(1734099685.980:211): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:26.062285 systemd[1]: Started kubelet.service. Dec 13 14:21:26.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:26.065882 kernel: audit: type=1130 audit(1734099686.061:212): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:26.108494 kubelet[1871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:21:26.108494 kubelet[1871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:21:26.108494 kubelet[1871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:21:26.108875 kubelet[1871]: I1213 14:21:26.108558 1871 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:21:26.936357 kubelet[1871]: I1213 14:21:26.936321 1871 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:21:26.936357 kubelet[1871]: I1213 14:21:26.936349 1871 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:21:26.936560 kubelet[1871]: I1213 14:21:26.936533 1871 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:21:26.961957 kubelet[1871]: E1213 14:21:26.961935 1871 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.138:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.962105 kubelet[1871]: I1213 14:21:26.962081 1871 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:21:26.972415 kubelet[1871]: I1213 14:21:26.972385 1871 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:21:26.973761 kubelet[1871]: I1213 14:21:26.973731 1871 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:21:26.974053 kubelet[1871]: I1213 14:21:26.974031 1871 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:21:26.974152 kubelet[1871]: I1213 14:21:26.974055 1871 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:21:26.974152 kubelet[1871]: I1213 14:21:26.974069 1871 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:21:26.974211 kubelet[1871]: 
I1213 14:21:26.974175 1871 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:21:26.976749 kubelet[1871]: I1213 14:21:26.976727 1871 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:21:26.976883 kubelet[1871]: I1213 14:21:26.976870 1871 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:21:26.976971 kubelet[1871]: I1213 14:21:26.976959 1871 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:21:26.977045 kubelet[1871]: I1213 14:21:26.977035 1871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:21:26.977233 kubelet[1871]: W1213 14:21:26.977148 1871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.977233 kubelet[1871]: E1213 14:21:26.977198 1871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.138:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.977664 kubelet[1871]: W1213 14:21:26.977617 1871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.977664 kubelet[1871]: E1213 14:21:26.977662 1871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.978071 kubelet[1871]: I1213 14:21:26.978054 1871 kuberuntime_manager.go:258] "Container runtime 
initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:21:26.978495 kubelet[1871]: I1213 14:21:26.978482 1871 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:21:26.979087 kubelet[1871]: W1213 14:21:26.979065 1871 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 14:21:26.979887 kubelet[1871]: I1213 14:21:26.979838 1871 server.go:1256] "Started kubelet" Dec 13 14:21:26.980177 kubelet[1871]: I1213 14:21:26.980162 1871 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:21:26.981061 kubelet[1871]: I1213 14:21:26.981045 1871 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:21:26.981000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:26.991929 kernel: audit: type=1400 audit(1734099686.981:213): avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:26.981000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:26.981000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000958090 a1=4000079db8 a2=4000958060 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:26.981000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:26.985000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:26.985000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:26.985000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400010fde0 a1=4000079dd0 a2=4000958120 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:26.985000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:26.992315 kubelet[1871]: I1213 14:21:26.990879 1871 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:21:26.992315 kubelet[1871]: I1213 14:21:26.990933 1871 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:21:26.994142 kubelet[1871]: E1213 14:21:26.994114 1871 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:21:26.994430 kubelet[1871]: I1213 14:21:26.994414 1871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:21:26.994517 kubelet[1871]: I1213 14:21:26.994483 1871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:21:26.994770 kubelet[1871]: I1213 14:21:26.994743 1871 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:21:26.994918 kubelet[1871]: I1213 14:21:26.994908 1871 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:21:26.995227 kubelet[1871]: I1213 14:21:26.995085 1871 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:21:26.995295 kubelet[1871]: I1213 14:21:26.995262 1871 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:21:26.995469 kubelet[1871]: E1213 14:21:26.995341 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="200ms" Dec 13 14:21:26.995680 kubelet[1871]: W1213 14:21:26.995626 1871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.995733 kubelet[1871]: E1213 14:21:26.995687 1871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.138:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:26.998349 kubelet[1871]: I1213 14:21:26.998330 1871 
factory.go:221] Registration of the containerd container factory successfully Dec 13 14:21:26.998349 kubelet[1871]: I1213 14:21:26.998348 1871 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:21:26.998454 kubelet[1871]: I1213 14:21:26.998416 1871 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:21:26.997000 audit[1884]: NETFILTER_CFG table=mangle:26 family=2 entries=2 op=nft_register_chain pid=1884 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:26.997000 audit[1884]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff35c24d0 a2=0 a3=1 items=0 ppid=1871 pid=1884 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:26.997000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:21:26.998000 audit[1885]: NETFILTER_CFG table=filter:27 family=2 entries=1 op=nft_register_chain pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:26.998000 audit[1885]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde76bfe0 a2=0 a3=1 items=0 ppid=1871 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:26.998000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:21:27.000000 audit[1888]: NETFILTER_CFG table=filter:28 family=2 entries=2 op=nft_register_chain pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.000000 
audit[1888]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd22fd5f0 a2=0 a3=1 items=0 ppid=1871 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.000000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:21:27.001000 audit[1890]: NETFILTER_CFG table=filter:29 family=2 entries=2 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.001000 audit[1890]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd2bc3e90 a2=0 a3=1 items=0 ppid=1871 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.001000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:21:27.012346 kubelet[1871]: E1213 14:21:27.012301 1871 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.138:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.138:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1810c27a478c52bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2024-12-13 14:21:26.979818173 +0000 UTC m=+0.914175698,LastTimestamp:2024-12-13 14:21:26.979818173 +0000 UTC m=+0.914175698,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Dec 13 14:21:27.016500 kubelet[1871]: I1213 14:21:27.016478 1871 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:21:27.016500 kubelet[1871]: I1213 14:21:27.016497 1871 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:21:27.016582 kubelet[1871]: I1213 14:21:27.016514 1871 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:21:27.016000 audit[1897]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.016000 audit[1897]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe1088650 a2=0 a3=1 items=0 ppid=1871 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.016000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Dec 13 14:21:27.017605 kubelet[1871]: I1213 14:21:27.017581 1871 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Dec 13 14:21:27.017000 audit[1898]: NETFILTER_CFG table=mangle:31 family=10 entries=2 op=nft_register_chain pid=1898 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:27.017000 audit[1898]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd96841c0 a2=0 a3=1 items=0 ppid=1871 pid=1898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.017000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Dec 13 14:21:27.018513 kubelet[1871]: I1213 14:21:27.018492 1871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 14:21:27.018551 kubelet[1871]: I1213 14:21:27.018514 1871 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:21:27.018551 kubelet[1871]: I1213 14:21:27.018529 1871 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:21:27.018601 kubelet[1871]: E1213 14:21:27.018568 1871 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:21:27.018000 audit[1899]: NETFILTER_CFG table=mangle:32 family=2 entries=1 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.018000 audit[1899]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe0b7eea0 a2=0 a3=1 items=0 ppid=1871 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.018000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 
14:21:27.019090 kubelet[1871]: W1213 14:21:27.018993 1871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:27.019090 kubelet[1871]: E1213 14:21:27.019050 1871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.138:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:27.019000 audit[1901]: NETFILTER_CFG table=mangle:33 family=10 entries=1 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:27.019000 audit[1901]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc5c56dc0 a2=0 a3=1 items=0 ppid=1871 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.019000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Dec 13 14:21:27.019000 audit[1902]: NETFILTER_CFG table=nat:34 family=2 entries=1 op=nft_register_chain pid=1902 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.019000 audit[1902]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7403810 a2=0 a3=1 items=0 ppid=1871 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.019000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 
Dec 13 14:21:27.020000 audit[1904]: NETFILTER_CFG table=filter:35 family=2 entries=1 op=nft_register_chain pid=1904 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:27.020000 audit[1904]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe7ab1030 a2=0 a3=1 items=0 ppid=1871 pid=1904 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.020000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:21:27.020000 audit[1903]: NETFILTER_CFG table=nat:36 family=10 entries=2 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:27.020000 audit[1903]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=128 a0=3 a1=ffffdc659c00 a2=0 a3=1 items=0 ppid=1871 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.020000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Dec 13 14:21:27.021000 audit[1905]: NETFILTER_CFG table=filter:37 family=10 entries=2 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:27.021000 audit[1905]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff43b91e0 a2=0 a3=1 items=0 ppid=1871 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.021000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Dec 13 14:21:27.079945 kubelet[1871]: I1213 14:21:27.079887 1871 policy_none.go:49] "None policy: Start" Dec 13 14:21:27.080681 kubelet[1871]: I1213 14:21:27.080664 1871 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:21:27.080733 kubelet[1871]: I1213 14:21:27.080712 1871 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:21:27.086046 kubelet[1871]: I1213 14:21:27.086014 1871 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:21:27.085000 audit[1871]: AVC avc: denied { mac_admin } for pid=1871 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:27.085000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:27.085000 audit[1871]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000de63c0 a1=4000f48de0 a2=4000de6390 a3=25 items=0 ppid=1 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:27.085000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:27.086271 kubelet[1871]: I1213 14:21:27.086101 1871 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:21:27.086331 kubelet[1871]: I1213 14:21:27.086296 1871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:21:27.087354 kubelet[1871]: E1213 14:21:27.087332 1871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Dec 13 14:21:27.096377 kubelet[1871]: I1213 14:21:27.096345 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:21:27.098906 kubelet[1871]: E1213 14:21:27.098877 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Dec 13 14:21:27.119003 kubelet[1871]: I1213 14:21:27.118983 1871 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:21:27.120046 kubelet[1871]: I1213 14:21:27.120016 1871 topology_manager.go:215] "Topology Admit Handler" podUID="c61de71a704b2d975a657ce01801e9f9" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:21:27.120790 kubelet[1871]: I1213 14:21:27.120771 1871 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:21:27.196225 kubelet[1871]: E1213 14:21:27.196155 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="400ms" Dec 13 14:21:27.296439 kubelet[1871]: I1213 14:21:27.296408 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:27.296439 kubelet[1871]: I1213 14:21:27.296446 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:27.296612 kubelet[1871]: I1213 14:21:27.296467 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:27.296612 kubelet[1871]: I1213 14:21:27.296491 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:27.296612 kubelet[1871]: I1213 14:21:27.296510 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:21:27.296612 kubelet[1871]: I1213 14:21:27.296589 1871 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:27.296706 kubelet[1871]: I1213 14:21:27.296637 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:27.296706 kubelet[1871]: I1213 14:21:27.296660 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:27.296706 kubelet[1871]: I1213 14:21:27.296692 1871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:27.300129 kubelet[1871]: I1213 14:21:27.300105 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:21:27.300475 kubelet[1871]: E1213 14:21:27.300454 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" node="localhost" Dec 13 14:21:27.425976 kubelet[1871]: E1213 14:21:27.425946 1871 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:27.426078 kubelet[1871]: E1213 14:21:27.426059 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:27.426302 kubelet[1871]: E1213 14:21:27.426266 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:27.426475 env[1320]: time="2024-12-13T14:21:27.426412869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:27.427002 env[1320]: time="2024-12-13T14:21:27.426859073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:27.427002 env[1320]: time="2024-12-13T14:21:27.426891031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c61de71a704b2d975a657ce01801e9f9,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:27.596761 kubelet[1871]: E1213 14:21:27.596692 1871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.138:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.138:6443: connect: connection refused" interval="800ms" Dec 13 14:21:27.702108 kubelet[1871]: I1213 14:21:27.702078 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:21:27.702384 kubelet[1871]: E1213 14:21:27.702354 1871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.138:6443/api/v1/nodes\": dial tcp 10.0.0.138:6443: connect: connection refused" 
node="localhost" Dec 13 14:21:27.902227 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount638716220.mount: Deactivated successfully. Dec 13 14:21:27.907249 env[1320]: time="2024-12-13T14:21:27.907207546Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.909180 env[1320]: time="2024-12-13T14:21:27.909085437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.909979 env[1320]: time="2024-12-13T14:21:27.909950362Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.911117 env[1320]: time="2024-12-13T14:21:27.911083089Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.912605 env[1320]: time="2024-12-13T14:21:27.912568745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.913244 env[1320]: time="2024-12-13T14:21:27.913216160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.913857 env[1320]: time="2024-12-13T14:21:27.913823349Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.915889 env[1320]: time="2024-12-13T14:21:27.915859389Z" level=info 
msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.918884 env[1320]: time="2024-12-13T14:21:27.918831459Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.921160 env[1320]: time="2024-12-13T14:21:27.921132546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.922030 env[1320]: time="2024-12-13T14:21:27.922002344Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.922767 env[1320]: time="2024-12-13T14:21:27.922741077Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:27.925829 kubelet[1871]: W1213 14:21:27.925761 1871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:27.925829 kubelet[1871]: E1213 14:21:27.925811 1871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.138:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.138:6443: connect: connection refused Dec 13 14:21:27.947082 env[1320]: time="2024-12-13T14:21:27.946992085Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:27.947082 env[1320]: time="2024-12-13T14:21:27.947047810Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:27.947082 env[1320]: time="2024-12-13T14:21:27.947058276Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:27.947430 env[1320]: time="2024-12-13T14:21:27.947202284Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:27.947430 env[1320]: time="2024-12-13T14:21:27.947236478Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:27.947430 env[1320]: time="2024-12-13T14:21:27.947246785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:27.947539 env[1320]: time="2024-12-13T14:21:27.947454507Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/119020e6d024fe7e7bcee2456c69ef6589ed9a5da858df7e08beb2bba53a2c4f pid=1924 runtime=io.containerd.runc.v2 Dec 13 14:21:27.947634 env[1320]: time="2024-12-13T14:21:27.947592643Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0d8a2b024bbb065c3ad1a98c1fca68241c4ced294b657887488184023481628a pid=1919 runtime=io.containerd.runc.v2 Dec 13 14:21:27.953347 env[1320]: time="2024-12-13T14:21:27.951024619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:27.953347 env[1320]: time="2024-12-13T14:21:27.951060331Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:27.953347 env[1320]: time="2024-12-13T14:21:27.951070038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:27.953347 env[1320]: time="2024-12-13T14:21:27.951174339Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6c39620248da2a87e951415cd3cbc61caac55866c58ce679027c7b438bc3f956 pid=1950 runtime=io.containerd.runc.v2 Dec 13 14:21:28.021941 env[1320]: time="2024-12-13T14:21:28.021430618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:4f8e0d694c07e04969646aa3c152c34a,Namespace:kube-system,Attempt:0,} returns sandbox id \"119020e6d024fe7e7bcee2456c69ef6589ed9a5da858df7e08beb2bba53a2c4f\"" Dec 13 14:21:28.022673 kubelet[1871]: E1213 14:21:28.022489 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:28.023458 env[1320]: time="2024-12-13T14:21:28.023296597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:c4144e8f85b2123a6afada0c1705bbba,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d8a2b024bbb065c3ad1a98c1fca68241c4ced294b657887488184023481628a\"" Dec 13 14:21:28.023830 kubelet[1871]: E1213 14:21:28.023815 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:28.025896 env[1320]: time="2024-12-13T14:21:28.025025057Z" level=info msg="CreateContainer within sandbox 
\"119020e6d024fe7e7bcee2456c69ef6589ed9a5da858df7e08beb2bba53a2c4f\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 14:21:28.025896 env[1320]: time="2024-12-13T14:21:28.025478328Z" level=info msg="CreateContainer within sandbox \"0d8a2b024bbb065c3ad1a98c1fca68241c4ced294b657887488184023481628a\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 14:21:28.027356 env[1320]: time="2024-12-13T14:21:28.027319736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:c61de71a704b2d975a657ce01801e9f9,Namespace:kube-system,Attempt:0,} returns sandbox id \"6c39620248da2a87e951415cd3cbc61caac55866c58ce679027c7b438bc3f956\"" Dec 13 14:21:28.027864 kubelet[1871]: E1213 14:21:28.027816 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:28.030136 env[1320]: time="2024-12-13T14:21:28.030106839Z" level=info msg="CreateContainer within sandbox \"6c39620248da2a87e951415cd3cbc61caac55866c58ce679027c7b438bc3f956\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 14:21:28.041830 env[1320]: time="2024-12-13T14:21:28.041785552Z" level=info msg="CreateContainer within sandbox \"0d8a2b024bbb065c3ad1a98c1fca68241c4ced294b657887488184023481628a\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9e8f927a52fa70d37d50720dbb70f40e6fe8e7cfbcd619fc299544b8a5da05e5\"" Dec 13 14:21:28.042546 env[1320]: time="2024-12-13T14:21:28.042469033Z" level=info msg="StartContainer for \"9e8f927a52fa70d37d50720dbb70f40e6fe8e7cfbcd619fc299544b8a5da05e5\"" Dec 13 14:21:28.044695 env[1320]: time="2024-12-13T14:21:28.044664148Z" level=info msg="CreateContainer within sandbox \"119020e6d024fe7e7bcee2456c69ef6589ed9a5da858df7e08beb2bba53a2c4f\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id 
\"e552ab5b4f1a67fc0ff98e0cad4c0466db4ade77a2eaae9532b7741076fe397d\"" Dec 13 14:21:28.045202 env[1320]: time="2024-12-13T14:21:28.045151339Z" level=info msg="StartContainer for \"e552ab5b4f1a67fc0ff98e0cad4c0466db4ade77a2eaae9532b7741076fe397d\"" Dec 13 14:21:28.045466 env[1320]: time="2024-12-13T14:21:28.045430932Z" level=info msg="CreateContainer within sandbox \"6c39620248da2a87e951415cd3cbc61caac55866c58ce679027c7b438bc3f956\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"cad21b6320637ca7794f9a8c598ebbfbf563376504081e10fbc335147057b3b5\"" Dec 13 14:21:28.045897 env[1320]: time="2024-12-13T14:21:28.045873655Z" level=info msg="StartContainer for \"cad21b6320637ca7794f9a8c598ebbfbf563376504081e10fbc335147057b3b5\"" Dec 13 14:21:28.142862 env[1320]: time="2024-12-13T14:21:28.142809899Z" level=info msg="StartContainer for \"e552ab5b4f1a67fc0ff98e0cad4c0466db4ade77a2eaae9532b7741076fe397d\" returns successfully" Dec 13 14:21:28.154376 env[1320]: time="2024-12-13T14:21:28.154298794Z" level=info msg="StartContainer for \"cad21b6320637ca7794f9a8c598ebbfbf563376504081e10fbc335147057b3b5\" returns successfully" Dec 13 14:21:28.168939 env[1320]: time="2024-12-13T14:21:28.168903927Z" level=info msg="StartContainer for \"9e8f927a52fa70d37d50720dbb70f40e6fe8e7cfbcd619fc299544b8a5da05e5\" returns successfully" Dec 13 14:21:28.504514 kubelet[1871]: I1213 14:21:28.504400 1871 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:21:29.024697 kubelet[1871]: E1213 14:21:29.024664 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:29.027325 kubelet[1871]: E1213 14:21:29.027295 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:29.028918 
kubelet[1871]: E1213 14:21:29.028894 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:29.681091 kubelet[1871]: E1213 14:21:29.681039 1871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Dec 13 14:21:29.756730 kubelet[1871]: I1213 14:21:29.756700 1871 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:21:29.985640 kubelet[1871]: I1213 14:21:29.985549 1871 apiserver.go:52] "Watching apiserver" Dec 13 14:21:29.995510 kubelet[1871]: I1213 14:21:29.995480 1871 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 14:21:30.035751 kubelet[1871]: E1213 14:21:30.035706 1871 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:30.036243 kubelet[1871]: E1213 14:21:30.036209 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:30.134256 kubelet[1871]: E1213 14:21:30.134197 1871 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Dec 13 14:21:30.134524 kubelet[1871]: E1213 14:21:30.134500 1871 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:32.184833 systemd[1]: Reloading. 
Dec 13 14:21:32.225832 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2024-12-13T14:21:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 14:21:32.225873 /usr/lib/systemd/system-generators/torcx-generator[2170]: time="2024-12-13T14:21:32Z" level=info msg="torcx already run" Dec 13 14:21:32.284700 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 14:21:32.284720 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 14:21:32.299991 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 14:21:32.367198 kubelet[1871]: I1213 14:21:32.367126 1871 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:21:32.367286 systemd[1]: Stopping kubelet.service... Dec 13 14:21:32.378210 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 14:21:32.378505 systemd[1]: Stopped kubelet.service. Dec 13 14:21:32.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:32.379213 kernel: kauditd_printk_skb: 47 callbacks suppressed Dec 13 14:21:32.379255 kernel: audit: type=1131 audit(1734099692.377:228): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:32.383442 systemd[1]: Starting kubelet.service... Dec 13 14:21:32.462096 systemd[1]: Started kubelet.service. Dec 13 14:21:32.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:32.466664 kernel: audit: type=1130 audit(1734099692.461:229): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:32.514390 kubelet[2223]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 14:21:32.514713 kubelet[2223]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 14:21:32.514759 kubelet[2223]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 14:21:32.514917 kubelet[2223]: I1213 14:21:32.514882 2223 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 14:21:32.519138 kubelet[2223]: I1213 14:21:32.519107 2223 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 14:21:32.519138 kubelet[2223]: I1213 14:21:32.519133 2223 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 14:21:32.519301 kubelet[2223]: I1213 14:21:32.519285 2223 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 14:21:32.520646 kubelet[2223]: I1213 14:21:32.520622 2223 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 14:21:32.523336 kubelet[2223]: I1213 14:21:32.523307 2223 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 14:21:32.527678 kubelet[2223]: I1213 14:21:32.527654 2223 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 14:21:32.528050 kubelet[2223]: I1213 14:21:32.528035 2223 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 14:21:32.528212 kubelet[2223]: I1213 14:21:32.528197 2223 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 14:21:32.528301 kubelet[2223]: I1213 14:21:32.528219 2223 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 14:21:32.528301 kubelet[2223]: I1213 14:21:32.528229 2223 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 14:21:32.528301 kubelet[2223]: 
I1213 14:21:32.528258 2223 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:21:32.528398 kubelet[2223]: I1213 14:21:32.528340 2223 kubelet.go:396] "Attempting to sync node with API server" Dec 13 14:21:32.528398 kubelet[2223]: I1213 14:21:32.528353 2223 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 14:21:32.528398 kubelet[2223]: I1213 14:21:32.528373 2223 kubelet.go:312] "Adding apiserver pod source" Dec 13 14:21:32.528398 kubelet[2223]: I1213 14:21:32.528383 2223 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 14:21:32.530399 kubelet[2223]: I1213 14:21:32.530376 2223 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 14:21:32.530690 kubelet[2223]: I1213 14:21:32.530672 2223 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 14:21:32.533222 kubelet[2223]: I1213 14:21:32.533205 2223 server.go:1256] "Started kubelet" Dec 13 14:21:32.534494 kubelet[2223]: I1213 14:21:32.533482 2223 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 14:21:32.535030 kubelet[2223]: I1213 14:21:32.533576 2223 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 14:21:32.535030 kubelet[2223]: I1213 14:21:32.535012 2223 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 14:21:32.536286 kubelet[2223]: E1213 14:21:32.536263 2223 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 14:21:32.536868 kubelet[2223]: I1213 14:21:32.536831 2223 kubelet.go:1417] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Dec 13 14:21:32.537355 kubelet[2223]: I1213 14:21:32.537337 2223 kubelet.go:1421] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Dec 13 14:21:32.537494 kubelet[2223]: I1213 14:21:32.537444 2223 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 14:21:32.538006 kubelet[2223]: I1213 14:21:32.537983 2223 server.go:461] "Adding debug handlers to kubelet server" Dec 13 14:21:32.535000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:32.541060 kubelet[2223]: I1213 14:21:32.541039 2223 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 14:21:32.535000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:32.542170 kubelet[2223]: I1213 14:21:32.542142 2223 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 14:21:32.542496 kubelet[2223]: I1213 14:21:32.542480 2223 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 14:21:32.543236 kernel: audit: type=1400 audit(1734099692.535:230): avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:32.551907 kernel: audit: type=1401 audit(1734099692.535:230): op=setxattr 
invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:32.552890 kernel: audit: type=1300 audit(1734099692.535:230): arch=c00000b7 syscall=5 success=no exit=-22 a0=40004b7cb0 a1=4000047cf8 a2=40004b7c80 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:32.535000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40004b7cb0 a1=4000047cf8 a2=40004b7c80 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:32.553132 kubelet[2223]: I1213 14:21:32.553110 2223 factory.go:221] Registration of the systemd container factory successfully Dec 13 14:21:32.553564 kubelet[2223]: I1213 14:21:32.553540 2223 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 14:21:32.535000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:32.559793 kernel: audit: type=1327 audit(1734099692.535:230): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:32.564775 kernel: audit: type=1400 audit(1734099692.536:231): avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:32.536000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:32.536000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:32.566609 kernel: audit: type=1401 audit(1734099692.536:231): op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:32.566672 kernel: audit: type=1300 audit(1734099692.536:231): arch=c00000b7 syscall=5 success=no exit=-22 a0=400010f5e0 a1=4000047d10 a2=40004b7d40 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:32.536000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400010f5e0 a1=4000047d10 a2=40004b7d40 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:32.567358 kubelet[2223]: I1213 14:21:32.567310 2223 factory.go:221] Registration of the containerd container factory successfully Dec 13 14:21:32.536000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:32.571519 kubelet[2223]: I1213 14:21:32.571492 2223 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 14:21:32.572297 kubelet[2223]: I1213 14:21:32.572281 2223 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 14:21:32.572333 kubelet[2223]: I1213 14:21:32.572300 2223 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 14:21:32.572333 kubelet[2223]: I1213 14:21:32.572318 2223 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 14:21:32.572391 kubelet[2223]: E1213 14:21:32.572364 2223 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 14:21:32.573274 kernel: audit: type=1327 audit(1734099692.536:231): proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:32.609498 kubelet[2223]: I1213 14:21:32.609467 2223 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 14:21:32.609498 kubelet[2223]: I1213 14:21:32.609492 2223 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 14:21:32.609638 kubelet[2223]: I1213 14:21:32.609511 2223 state_mem.go:36] "Initialized new in-memory state store" Dec 13 14:21:32.609666 kubelet[2223]: I1213 14:21:32.609639 2223 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 14:21:32.609666 kubelet[2223]: I1213 14:21:32.609657 2223 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 14:21:32.609666 kubelet[2223]: I1213 14:21:32.609665 2223 policy_none.go:49] "None policy: Start" Dec 13 14:21:32.610274 kubelet[2223]: I1213 14:21:32.610248 2223 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 14:21:32.610274 kubelet[2223]: I1213 14:21:32.610278 2223 state_mem.go:35] "Initializing new in-memory state store" Dec 13 14:21:32.610460 kubelet[2223]: I1213 14:21:32.610445 2223 state_mem.go:75] "Updated machine memory state" Dec 13 14:21:32.611817 kubelet[2223]: I1213 14:21:32.611794 2223 manager.go:479] 
"Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 14:21:32.610000 audit[2223]: AVC avc: denied { mac_admin } for pid=2223 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:21:32.610000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Dec 13 14:21:32.610000 audit[2223]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40014c1410 a1=4000a13fe0 a2=40014c13e0 a3=25 items=0 ppid=1 pid=2223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/usr/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:32.610000 audit: PROCTITLE proctitle=2F7573722F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Dec 13 14:21:32.612025 kubelet[2223]: I1213 14:21:32.611871 2223 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Dec 13 14:21:32.612176 kubelet[2223]: I1213 14:21:32.612150 2223 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 14:21:32.644226 kubelet[2223]: I1213 14:21:32.644204 2223 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Dec 13 14:21:32.650495 kubelet[2223]: I1213 14:21:32.650464 2223 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Dec 13 14:21:32.650569 kubelet[2223]: I1213 14:21:32.650552 2223 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Dec 13 14:21:32.673426 kubelet[2223]: I1213 14:21:32.673397 2223 topology_manager.go:215] "Topology Admit Handler" podUID="c61de71a704b2d975a657ce01801e9f9" podNamespace="kube-system" podName="kube-apiserver-localhost" Dec 13 14:21:32.673516 kubelet[2223]: I1213 14:21:32.673472 2223 topology_manager.go:215] "Topology Admit Handler" podUID="4f8e0d694c07e04969646aa3c152c34a" podNamespace="kube-system" podName="kube-controller-manager-localhost" Dec 13 14:21:32.673545 kubelet[2223]: I1213 14:21:32.673528 2223 topology_manager.go:215] "Topology Admit Handler" podUID="c4144e8f85b2123a6afada0c1705bbba" podNamespace="kube-system" podName="kube-scheduler-localhost" Dec 13 14:21:32.743796 kubelet[2223]: I1213 14:21:32.743716 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:32.743951 kubelet[2223]: I1213 14:21:32.743936 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:32.744045 kubelet[2223]: I1213 14:21:32.744033 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c4144e8f85b2123a6afada0c1705bbba-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"c4144e8f85b2123a6afada0c1705bbba\") " pod="kube-system/kube-scheduler-localhost" Dec 13 14:21:32.744132 kubelet[2223]: I1213 14:21:32.744120 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:32.744208 kubelet[2223]: I1213 14:21:32.744198 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:32.744281 kubelet[2223]: I1213 14:21:32.744270 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:32.744360 kubelet[2223]: I1213 14:21:32.744348 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/4f8e0d694c07e04969646aa3c152c34a-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"4f8e0d694c07e04969646aa3c152c34a\") " pod="kube-system/kube-controller-manager-localhost" Dec 13 14:21:32.744435 kubelet[2223]: I1213 14:21:32.744424 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:32.744519 kubelet[2223]: I1213 14:21:32.744508 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c61de71a704b2d975a657ce01801e9f9-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"c61de71a704b2d975a657ce01801e9f9\") " pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:32.979685 kubelet[2223]: E1213 14:21:32.979649 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:32.980068 kubelet[2223]: E1213 14:21:32.980049 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:32.982940 kubelet[2223]: E1213 14:21:32.982920 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:33.531173 kubelet[2223]: I1213 14:21:33.531131 2223 apiserver.go:52] "Watching apiserver" Dec 13 14:21:33.542970 kubelet[2223]: I1213 14:21:33.542930 2223 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 
14:21:33.586616 kubelet[2223]: E1213 14:21:33.586576 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:33.586616 kubelet[2223]: E1213 14:21:33.586615 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:33.602531 kubelet[2223]: E1213 14:21:33.602490 2223 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Dec 13 14:21:33.602994 kubelet[2223]: E1213 14:21:33.602977 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:33.620146 kubelet[2223]: I1213 14:21:33.620108 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.6200684939999999 podStartE2EDuration="1.620068494s" podCreationTimestamp="2024-12-13 14:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:21:33.618064069 +0000 UTC m=+1.151005773" watchObservedRunningTime="2024-12-13 14:21:33.620068494 +0000 UTC m=+1.153010158" Dec 13 14:21:33.661431 kubelet[2223]: I1213 14:21:33.661396 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.6613576220000001 podStartE2EDuration="1.661357622s" podCreationTimestamp="2024-12-13 14:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:21:33.647611077 +0000 UTC m=+1.180552781" 
watchObservedRunningTime="2024-12-13 14:21:33.661357622 +0000 UTC m=+1.194299286" Dec 13 14:21:33.671419 kubelet[2223]: I1213 14:21:33.671386 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.671349408 podStartE2EDuration="1.671349408s" podCreationTimestamp="2024-12-13 14:21:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:21:33.662609928 +0000 UTC m=+1.195551632" watchObservedRunningTime="2024-12-13 14:21:33.671349408 +0000 UTC m=+1.204291112" Dec 13 14:21:34.587769 kubelet[2223]: E1213 14:21:34.587731 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:36.597734 sudo[1486]: pam_unix(sudo:session): session closed for user root Dec 13 14:21:36.596000 audit[1486]: USER_END pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Dec 13 14:21:36.597000 audit[1486]: CRED_DISP pid=1486 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Dec 13 14:21:36.599036 sshd[1480]: pam_unix(sshd:session): session closed for user core Dec 13 14:21:36.599000 audit[1480]: USER_END pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:36.599000 audit[1480]: CRED_DISP pid=1480 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:21:36.601953 systemd[1]: sshd@6-10.0.0.138:22-10.0.0.1:40682.service: Deactivated successfully. Dec 13 14:21:36.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.138:22-10.0.0.1:40682 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:21:36.602910 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 14:21:36.602936 systemd-logind[1303]: Session 7 logged out. Waiting for processes to exit. Dec 13 14:21:36.603931 systemd-logind[1303]: Removed session 7. 
Dec 13 14:21:38.203969 kubelet[2223]: E1213 14:21:38.203893 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:38.245772 kubelet[2223]: E1213 14:21:38.245724 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:38.592733 kubelet[2223]: E1213 14:21:38.592691 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:38.592852 kubelet[2223]: E1213 14:21:38.592789 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:40.128911 kubelet[2223]: E1213 14:21:40.128881 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:40.594870 kubelet[2223]: E1213 14:21:40.594825 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:46.612256 kubelet[2223]: I1213 14:21:46.612213 2223 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 14:21:46.612612 env[1320]: time="2024-12-13T14:21:46.612579040Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 14:21:46.612937 kubelet[2223]: I1213 14:21:46.612915 2223 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 14:21:46.966361 update_engine[1306]: I1213 14:21:46.965925 1306 update_attempter.cc:509] Updating boot flags... Dec 13 14:21:47.354686 kubelet[2223]: I1213 14:21:47.353332 2223 topology_manager.go:215] "Topology Admit Handler" podUID="76a48135-98c4-4fd0-90d2-886227f2c42c" podNamespace="kube-system" podName="kube-proxy-5d52m" Dec 13 14:21:47.443921 kubelet[2223]: I1213 14:21:47.443872 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76a48135-98c4-4fd0-90d2-886227f2c42c-kube-proxy\") pod \"kube-proxy-5d52m\" (UID: \"76a48135-98c4-4fd0-90d2-886227f2c42c\") " pod="kube-system/kube-proxy-5d52m" Dec 13 14:21:47.443921 kubelet[2223]: I1213 14:21:47.443921 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mmbtg\" (UniqueName: \"kubernetes.io/projected/76a48135-98c4-4fd0-90d2-886227f2c42c-kube-api-access-mmbtg\") pod \"kube-proxy-5d52m\" (UID: \"76a48135-98c4-4fd0-90d2-886227f2c42c\") " pod="kube-system/kube-proxy-5d52m" Dec 13 14:21:47.444095 kubelet[2223]: I1213 14:21:47.443945 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76a48135-98c4-4fd0-90d2-886227f2c42c-lib-modules\") pod \"kube-proxy-5d52m\" (UID: \"76a48135-98c4-4fd0-90d2-886227f2c42c\") " pod="kube-system/kube-proxy-5d52m" Dec 13 14:21:47.444095 kubelet[2223]: I1213 14:21:47.443967 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76a48135-98c4-4fd0-90d2-886227f2c42c-xtables-lock\") pod \"kube-proxy-5d52m\" (UID: \"76a48135-98c4-4fd0-90d2-886227f2c42c\") " 
pod="kube-system/kube-proxy-5d52m" Dec 13 14:21:47.657386 kubelet[2223]: E1213 14:21:47.657282 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:47.658008 env[1320]: time="2024-12-13T14:21:47.657964336Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5d52m,Uid:76a48135-98c4-4fd0-90d2-886227f2c42c,Namespace:kube-system,Attempt:0,}" Dec 13 14:21:47.670613 env[1320]: time="2024-12-13T14:21:47.670555590Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:47.670613 env[1320]: time="2024-12-13T14:21:47.670594073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:47.670613 env[1320]: time="2024-12-13T14:21:47.670604073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:47.670785 env[1320]: time="2024-12-13T14:21:47.670749082Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9387a6ea09add8ebd3aab73d322758f2d41e008fb2396690a2053c4c058f3e5c pid=2332 runtime=io.containerd.runc.v2 Dec 13 14:21:47.720767 env[1320]: time="2024-12-13T14:21:47.720729197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5d52m,Uid:76a48135-98c4-4fd0-90d2-886227f2c42c,Namespace:kube-system,Attempt:0,} returns sandbox id \"9387a6ea09add8ebd3aab73d322758f2d41e008fb2396690a2053c4c058f3e5c\"" Dec 13 14:21:47.721672 kubelet[2223]: E1213 14:21:47.721649 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:47.724577 env[1320]: time="2024-12-13T14:21:47.724543979Z" level=info msg="CreateContainer within sandbox \"9387a6ea09add8ebd3aab73d322758f2d41e008fb2396690a2053c4c058f3e5c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 14:21:47.734897 env[1320]: time="2024-12-13T14:21:47.734423995Z" level=info msg="CreateContainer within sandbox \"9387a6ea09add8ebd3aab73d322758f2d41e008fb2396690a2053c4c058f3e5c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"60fde68f70010a7cc2b37db23fbcd52ffa76aebee3aa8c1ef5e2e0aed46ace29\"" Dec 13 14:21:47.735387 env[1320]: time="2024-12-13T14:21:47.735349129Z" level=info msg="StartContainer for \"60fde68f70010a7cc2b37db23fbcd52ffa76aebee3aa8c1ef5e2e0aed46ace29\"" Dec 13 14:21:47.763546 kubelet[2223]: I1213 14:21:47.763206 2223 topology_manager.go:215] "Topology Admit Handler" podUID="0188f741-c564-4975-8081-59698640533e" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-dj5dj" Dec 13 14:21:47.816931 env[1320]: time="2024-12-13T14:21:47.816878724Z" level=info msg="StartContainer for 
\"60fde68f70010a7cc2b37db23fbcd52ffa76aebee3aa8c1ef5e2e0aed46ace29\" returns successfully" Dec 13 14:21:47.845793 kubelet[2223]: I1213 14:21:47.845758 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/0188f741-c564-4975-8081-59698640533e-var-lib-calico\") pod \"tigera-operator-c7ccbd65-dj5dj\" (UID: \"0188f741-c564-4975-8081-59698640533e\") " pod="tigera-operator/tigera-operator-c7ccbd65-dj5dj" Dec 13 14:21:47.845793 kubelet[2223]: I1213 14:21:47.845798 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmlrc\" (UniqueName: \"kubernetes.io/projected/0188f741-c564-4975-8081-59698640533e-kube-api-access-qmlrc\") pod \"tigera-operator-c7ccbd65-dj5dj\" (UID: \"0188f741-c564-4975-8081-59698640533e\") " pod="tigera-operator/tigera-operator-c7ccbd65-dj5dj" Dec 13 14:21:47.934871 kernel: kauditd_printk_skb: 9 callbacks suppressed Dec 13 14:21:47.934962 kernel: audit: type=1325 audit(1734099707.930:238): table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.934980 kernel: audit: type=1300 audit(1734099707.930:238): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8c95ec0 a2=0 a3=1 items=0 ppid=2387 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.930000 audit[2429]: NETFILTER_CFG table=mangle:38 family=2 entries=1 op=nft_register_chain pid=2429 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.930000 audit[2429]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8c95ec0 a2=0 a3=1 items=0 ppid=2387 pid=2429 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.930000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:21:47.939538 kernel: audit: type=1327 audit(1734099707.930:238): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:21:47.930000 audit[2430]: NETFILTER_CFG table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:47.941572 kernel: audit: type=1325 audit(1734099707.930:239): table=mangle:39 family=10 entries=1 op=nft_register_chain pid=2430 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:47.930000 audit[2430]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff019eb20 a2=0 a3=1 items=0 ppid=2387 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.945331 kernel: audit: type=1300 audit(1734099707.930:239): arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff019eb20 a2=0 a3=1 items=0 ppid=2387 pid=2430 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.930000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:21:47.947262 kernel: audit: type=1327 audit(1734099707.930:239): proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Dec 13 14:21:47.947306 kernel: audit: type=1325 audit(1734099707.931:240): table=nat:40 family=2 entries=1 
op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.931000 audit[2431]: NETFILTER_CFG table=nat:40 family=2 entries=1 op=nft_register_chain pid=2431 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.931000 audit[2431]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7c1b560 a2=0 a3=1 items=0 ppid=2387 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.952965 kernel: audit: type=1300 audit(1734099707.931:240): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff7c1b560 a2=0 a3=1 items=0 ppid=2387 pid=2431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:21:47.954906 kernel: audit: type=1327 audit(1734099707.931:240): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:21:47.954956 kernel: audit: type=1325 audit(1734099707.931:241): table=filter:41 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.931000 audit[2432]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=2432 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:47.931000 audit[2432]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffcc6760 a2=0 a3=1 items=0 ppid=2387 pid=2432 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Dec 13 14:21:47.931000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:21:47.939000 audit[2433]: NETFILTER_CFG table=nat:42 family=10 entries=1 op=nft_register_chain pid=2433 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:47.939000 audit[2433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2f45c80 a2=0 a3=1 items=0 ppid=2387 pid=2433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.939000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Dec 13 14:21:47.941000 audit[2434]: NETFILTER_CFG table=filter:43 family=10 entries=1 op=nft_register_chain pid=2434 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:47.941000 audit[2434]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffac8f0e0 a2=0 a3=1 items=0 ppid=2387 pid=2434 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:47.941000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Dec 13 14:21:48.034000 audit[2436]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=2436 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.034000 audit[2436]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffcb8f6d70 a2=0 a3=1 items=0 ppid=2387 pid=2436 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.034000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:21:48.038000 audit[2438]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=2438 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.038000 audit[2438]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeaa71270 a2=0 a3=1 items=0 ppid=2387 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Dec 13 14:21:48.042000 audit[2441]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2441 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.042000 audit[2441]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffffed9e9b0 a2=0 a3=1 items=0 ppid=2387 pid=2441 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.042000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Dec 13 14:21:48.043000 audit[2442]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_chain pid=2442 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Dec 13 14:21:48.043000 audit[2442]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc19c480 a2=0 a3=1 items=0 ppid=2387 pid=2442 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.043000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:21:48.047000 audit[2444]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=2444 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.047000 audit[2444]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcfa7a5c0 a2=0 a3=1 items=0 ppid=2387 pid=2444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.047000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:21:48.048000 audit[2445]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=2445 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.048000 audit[2445]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffe437300 a2=0 a3=1 items=0 ppid=2387 pid=2445 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.048000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:21:48.051000 
audit[2447]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=2447 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.051000 audit[2447]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd673a340 a2=0 a3=1 items=0 ppid=2387 pid=2447 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.051000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:21:48.054000 audit[2450]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_rule pid=2450 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.054000 audit[2450]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc97bec0 a2=0 a3=1 items=0 ppid=2387 pid=2450 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.054000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Dec 13 14:21:48.055000 audit[2451]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_chain pid=2451 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.055000 audit[2451]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe81a7060 a2=0 a3=1 items=0 ppid=2387 pid=2451 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:21:48.057000 audit[2453]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=2453 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.057000 audit[2453]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce54d390 a2=0 a3=1 items=0 ppid=2387 pid=2453 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.057000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:21:48.058000 audit[2454]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.058000 audit[2454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe08ea020 a2=0 a3=1 items=0 ppid=2387 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:21:48.060000 audit[2456]: NETFILTER_CFG table=filter:55 family=2 entries=1 op=nft_register_rule pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.060000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff9a4dd90 
a2=0 a3=1 items=0 ppid=2387 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.060000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:21:48.063000 audit[2459]: NETFILTER_CFG table=filter:56 family=2 entries=1 op=nft_register_rule pid=2459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.063000 audit[2459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd6520e70 a2=0 a3=1 items=0 ppid=2387 pid=2459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.063000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:21:48.066000 audit[2462]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_rule pid=2462 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.066000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcaf35e90 a2=0 a3=1 items=0 ppid=2387 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.066000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:21:48.067593 env[1320]: time="2024-12-13T14:21:48.067552434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dj5dj,Uid:0188f741-c564-4975-8081-59698640533e,Namespace:tigera-operator,Attempt:0,}" Dec 13 14:21:48.067000 audit[2463]: NETFILTER_CFG table=nat:58 family=2 entries=1 op=nft_register_chain pid=2463 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.067000 audit[2463]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff421e5c0 a2=0 a3=1 items=0 ppid=2387 pid=2463 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.067000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:21:48.070000 audit[2465]: NETFILTER_CFG table=nat:59 family=2 entries=1 op=nft_register_rule pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.070000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffff06e37a0 a2=0 a3=1 items=0 ppid=2387 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:21:48.075000 audit[2472]: NETFILTER_CFG table=nat:60 family=2 
entries=1 op=nft_register_rule pid=2472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.075000 audit[2472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcc655c80 a2=0 a3=1 items=0 ppid=2387 pid=2472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.075000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:21:48.077000 audit[2476]: NETFILTER_CFG table=nat:61 family=2 entries=1 op=nft_register_chain pid=2476 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.077000 audit[2476]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff1a99710 a2=0 a3=1 items=0 ppid=2387 pid=2476 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.077000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:21:48.079000 audit[2484]: NETFILTER_CFG table=nat:62 family=2 entries=1 op=nft_register_rule pid=2484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Dec 13 14:21:48.079000 audit[2484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe8177700 a2=0 a3=1 items=0 ppid=2387 pid=2484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.079000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:21:48.080670 env[1320]: time="2024-12-13T14:21:48.080571757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:48.080670 env[1320]: time="2024-12-13T14:21:48.080622239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:48.080670 env[1320]: time="2024-12-13T14:21:48.080633280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:48.080825 env[1320]: time="2024-12-13T14:21:48.080783008Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3b7bce9abb42aafbf38e3841d21db3dc0562a9183d1c3e11d4ea3c54f87c7dd9 pid=2477 runtime=io.containerd.runc.v2 Dec 13 14:21:48.104000 audit[2501]: NETFILTER_CFG table=filter:63 family=2 entries=8 op=nft_register_rule pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:48.104000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5164 a0=3 a1=ffffc5777670 a2=0 a3=1 items=0 ppid=2387 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.104000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:48.118000 audit[2501]: NETFILTER_CFG table=nat:64 family=2 entries=14 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 
14:21:48.118000 audit[2501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffc5777670 a2=0 a3=1 items=0 ppid=2387 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.118000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:48.119000 audit[2520]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=2520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.119000 audit[2520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc53115d0 a2=0 a3=1 items=0 ppid=2387 pid=2520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.119000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Dec 13 14:21:48.122000 audit[2523]: NETFILTER_CFG table=filter:66 family=10 entries=2 op=nft_register_chain pid=2523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.122000 audit[2523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffffcd5d40 a2=0 a3=1 items=0 ppid=2387 pid=2523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.122000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Dec 13 14:21:48.124207 env[1320]: time="2024-12-13T14:21:48.124153735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-dj5dj,Uid:0188f741-c564-4975-8081-59698640533e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"3b7bce9abb42aafbf38e3841d21db3dc0562a9183d1c3e11d4ea3c54f87c7dd9\"" Dec 13 14:21:48.127089 env[1320]: time="2024-12-13T14:21:48.126980732Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 14:21:48.127000 audit[2526]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=2526 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.127000 audit[2526]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcfb042a0 a2=0 a3=1 items=0 ppid=2387 pid=2526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Dec 13 14:21:48.128000 audit[2527]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=2527 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.128000 audit[2527]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5991df0 a2=0 a3=1 items=0 ppid=2387 pid=2527 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.128000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Dec 13 14:21:48.130000 audit[2529]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=2529 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.130000 audit[2529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc6a0bb0 a2=0 a3=1 items=0 ppid=2387 pid=2529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Dec 13 14:21:48.131000 audit[2530]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=2530 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.131000 audit[2530]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe45339d0 a2=0 a3=1 items=0 ppid=2387 pid=2530 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.131000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Dec 13 14:21:48.134000 audit[2532]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=2532 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.134000 audit[2532]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe07b8430 a2=0 a3=1 
items=0 ppid=2387 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Dec 13 14:21:48.137000 audit[2535]: NETFILTER_CFG table=filter:72 family=10 entries=2 op=nft_register_chain pid=2535 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.137000 audit[2535]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffea4c7890 a2=0 a3=1 items=0 ppid=2387 pid=2535 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Dec 13 14:21:48.138000 audit[2536]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_chain pid=2536 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.138000 audit[2536]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc9516ae0 a2=0 a3=1 items=0 ppid=2387 pid=2536 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.138000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Dec 13 14:21:48.140000 audit[2538]: NETFILTER_CFG table=filter:74 family=10 entries=1 op=nft_register_rule pid=2538 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.140000 audit[2538]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff29e2120 a2=0 a3=1 items=0 ppid=2387 pid=2538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.140000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Dec 13 14:21:48.141000 audit[2539]: NETFILTER_CFG table=filter:75 family=10 entries=1 op=nft_register_chain pid=2539 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.141000 audit[2539]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe828f1c0 a2=0 a3=1 items=0 ppid=2387 pid=2539 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.141000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Dec 13 14:21:48.144000 audit[2541]: NETFILTER_CFG table=filter:76 family=10 entries=1 op=nft_register_rule pid=2541 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.144000 audit[2541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebf18d10 a2=0 a3=1 items=0 ppid=2387 pid=2541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Dec 13 14:21:48.147000 audit[2544]: NETFILTER_CFG table=filter:77 family=10 entries=1 op=nft_register_rule pid=2544 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.147000 audit[2544]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcf74ed10 a2=0 a3=1 items=0 ppid=2387 pid=2544 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.147000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Dec 13 14:21:48.150000 audit[2547]: NETFILTER_CFG table=filter:78 family=10 entries=1 op=nft_register_rule pid=2547 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.150000 audit[2547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebfc7ce0 a2=0 a3=1 items=0 ppid=2387 pid=2547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.150000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Dec 13 14:21:48.151000 audit[2548]: NETFILTER_CFG table=nat:79 family=10 entries=1 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.151000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd6261590 a2=0 a3=1 items=0 ppid=2387 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.151000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Dec 13 14:21:48.153000 audit[2550]: NETFILTER_CFG table=nat:80 family=10 entries=2 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.153000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffd2c74dd0 a2=0 a3=1 items=0 ppid=2387 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.153000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:21:48.156000 audit[2553]: NETFILTER_CFG table=nat:81 family=10 entries=2 op=nft_register_chain pid=2553 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.156000 audit[2553]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe4224330 a2=0 a3=1 items=0 ppid=2387 
pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.156000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Dec 13 14:21:48.157000 audit[2554]: NETFILTER_CFG table=nat:82 family=10 entries=1 op=nft_register_chain pid=2554 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.157000 audit[2554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff47140e0 a2=0 a3=1 items=0 ppid=2387 pid=2554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.157000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Dec 13 14:21:48.159000 audit[2556]: NETFILTER_CFG table=nat:83 family=10 entries=2 op=nft_register_chain pid=2556 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.159000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff3c49ae0 a2=0 a3=1 items=0 ppid=2387 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.159000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Dec 13 14:21:48.160000 audit[2557]: NETFILTER_CFG table=filter:84 
family=10 entries=1 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.160000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc11716b0 a2=0 a3=1 items=0 ppid=2387 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.160000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Dec 13 14:21:48.162000 audit[2559]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=2559 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.162000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb0ae4b0 a2=0 a3=1 items=0 ppid=2387 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.162000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:21:48.165000 audit[2562]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_rule pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Dec 13 14:21:48.165000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc49e5af0 a2=0 a3=1 items=0 ppid=2387 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.165000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Dec 13 14:21:48.167000 
audit[2564]: NETFILTER_CFG table=filter:87 family=10 entries=3 op=nft_register_rule pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:21:48.167000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2004 a0=3 a1=ffffe1695d20 a2=0 a3=1 items=0 ppid=2387 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.167000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:48.168000 audit[2564]: NETFILTER_CFG table=nat:88 family=10 entries=7 op=nft_register_chain pid=2564 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Dec 13 14:21:48.168000 audit[2564]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe1695d20 a2=0 a3=1 items=0 ppid=2387 pid=2564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:48.168000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:48.609509 kubelet[2223]: E1213 14:21:48.609444 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:48.617167 kubelet[2223]: I1213 14:21:48.616046 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-5d52m" podStartSLOduration=1.616007309 podStartE2EDuration="1.616007309s" podCreationTimestamp="2024-12-13 14:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 
14:21:48.615893343 +0000 UTC m=+16.148835047" watchObservedRunningTime="2024-12-13 14:21:48.616007309 +0000 UTC m=+16.148948973" Dec 13 14:21:49.646174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2516119727.mount: Deactivated successfully. Dec 13 14:21:50.257448 env[1320]: time="2024-12-13T14:21:50.257405385Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:50.260553 env[1320]: time="2024-12-13T14:21:50.260493260Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:50.263144 env[1320]: time="2024-12-13T14:21:50.263106752Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/tigera/operator:v1.36.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:50.264634 env[1320]: time="2024-12-13T14:21:50.264607668Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:50.265079 env[1320]: time="2024-12-13T14:21:50.265053970Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:30d521e4e84764b396aacbb2a373ca7a573f84571e3955b34329652acccfb73c\"" Dec 13 14:21:50.267526 env[1320]: time="2024-12-13T14:21:50.267482132Z" level=info msg="CreateContainer within sandbox \"3b7bce9abb42aafbf38e3841d21db3dc0562a9183d1c3e11d4ea3c54f87c7dd9\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 14:21:50.276359 env[1320]: time="2024-12-13T14:21:50.276327978Z" level=info msg="CreateContainer within sandbox \"3b7bce9abb42aafbf38e3841d21db3dc0562a9183d1c3e11d4ea3c54f87c7dd9\" for 
&ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ebad9324a612c875de6a727903e6cef8efa623c52c67b570f35c256a9e083bf1\"" Dec 13 14:21:50.276742 env[1320]: time="2024-12-13T14:21:50.276717477Z" level=info msg="StartContainer for \"ebad9324a612c875de6a727903e6cef8efa623c52c67b570f35c256a9e083bf1\"" Dec 13 14:21:50.330267 env[1320]: time="2024-12-13T14:21:50.330217931Z" level=info msg="StartContainer for \"ebad9324a612c875de6a727903e6cef8efa623c52c67b570f35c256a9e083bf1\" returns successfully" Dec 13 14:21:52.584629 kubelet[2223]: I1213 14:21:52.584582 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-dj5dj" podStartSLOduration=3.444899724 podStartE2EDuration="5.584546048s" podCreationTimestamp="2024-12-13 14:21:47 +0000 UTC" firstStartedPulling="2024-12-13 14:21:48.12568394 +0000 UTC m=+15.658625644" lastFinishedPulling="2024-12-13 14:21:50.265330264 +0000 UTC m=+17.798271968" observedRunningTime="2024-12-13 14:21:50.62171617 +0000 UTC m=+18.154657834" watchObservedRunningTime="2024-12-13 14:21:52.584546048 +0000 UTC m=+20.117487752" Dec 13 14:21:53.596000 audit[2604]: NETFILTER_CFG table=filter:89 family=2 entries=15 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.598118 kernel: kauditd_printk_skb: 143 callbacks suppressed Dec 13 14:21:53.598186 kernel: audit: type=1325 audit(1734099713.596:289): table=filter:89 family=2 entries=15 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.596000 audit[2604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffdc2fa890 a2=0 a3=1 items=0 ppid=2387 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.603517 kernel: audit: type=1300 
audit(1734099713.596:289): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffdc2fa890 a2=0 a3=1 items=0 ppid=2387 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.603577 kernel: audit: type=1327 audit(1734099713.596:289): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.596000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.610000 audit[2604]: NETFILTER_CFG table=nat:90 family=2 entries=12 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.610000 audit[2604]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdc2fa890 a2=0 a3=1 items=0 ppid=2387 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.616696 kernel: audit: type=1325 audit(1734099713.610:290): table=nat:90 family=2 entries=12 op=nft_register_rule pid=2604 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.616760 kernel: audit: type=1300 audit(1734099713.610:290): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdc2fa890 a2=0 a3=1 items=0 ppid=2387 pid=2604 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.616784 kernel: audit: type=1327 audit(1734099713.610:290): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.610000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.628000 audit[2606]: NETFILTER_CFG table=filter:91 family=2 entries=16 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.628000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcc61be90 a2=0 a3=1 items=0 ppid=2387 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.634503 kernel: audit: type=1325 audit(1734099713.628:291): table=filter:91 family=2 entries=16 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.634568 kernel: audit: type=1300 audit(1734099713.628:291): arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffcc61be90 a2=0 a3=1 items=0 ppid=2387 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.634595 kernel: audit: type=1327 audit(1734099713.628:291): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.628000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.637000 audit[2606]: NETFILTER_CFG table=nat:92 family=2 entries=12 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.637000 audit[2606]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcc61be90 a2=0 a3=1 items=0 ppid=2387 pid=2606 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:53.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:53.640874 kernel: audit: type=1325 audit(1734099713.637:292): table=nat:92 family=2 entries=12 op=nft_register_rule pid=2606 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:53.720092 kubelet[2223]: I1213 14:21:53.720045 2223 topology_manager.go:215] "Topology Admit Handler" podUID="7e02a5ec-a765-44d6-a382-0b80b7443362" podNamespace="calico-system" podName="calico-typha-74cc58f5d8-jsqpm" Dec 13 14:21:53.768752 kubelet[2223]: I1213 14:21:53.768710 2223 topology_manager.go:215] "Topology Admit Handler" podUID="ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0" podNamespace="calico-system" podName="calico-node-zlvk7" Dec 13 14:21:53.787094 kubelet[2223]: I1213 14:21:53.787049 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-var-run-calico\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787094 kubelet[2223]: I1213 14:21:53.787107 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7e02a5ec-a765-44d6-a382-0b80b7443362-tigera-ca-bundle\") pod \"calico-typha-74cc58f5d8-jsqpm\" (UID: \"7e02a5ec-a765-44d6-a382-0b80b7443362\") " pod="calico-system/calico-typha-74cc58f5d8-jsqpm" Dec 13 14:21:53.787259 kubelet[2223]: I1213 14:21:53.787129 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-cni-bin-dir\") pod \"calico-node-zlvk7\" 
(UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787259 kubelet[2223]: I1213 14:21:53.787160 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-cni-log-dir\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787259 kubelet[2223]: I1213 14:21:53.787181 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-flexvol-driver-host\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787259 kubelet[2223]: I1213 14:21:53.787202 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-policysync\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787259 kubelet[2223]: I1213 14:21:53.787225 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-xtables-lock\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787388 kubelet[2223]: I1213 14:21:53.787256 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dvg8\" (UniqueName: \"kubernetes.io/projected/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-kube-api-access-6dvg8\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " 
pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787388 kubelet[2223]: I1213 14:21:53.787287 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/7e02a5ec-a765-44d6-a382-0b80b7443362-typha-certs\") pod \"calico-typha-74cc58f5d8-jsqpm\" (UID: \"7e02a5ec-a765-44d6-a382-0b80b7443362\") " pod="calico-system/calico-typha-74cc58f5d8-jsqpm" Dec 13 14:21:53.787388 kubelet[2223]: I1213 14:21:53.787318 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6w2\" (UniqueName: \"kubernetes.io/projected/7e02a5ec-a765-44d6-a382-0b80b7443362-kube-api-access-5s6w2\") pod \"calico-typha-74cc58f5d8-jsqpm\" (UID: \"7e02a5ec-a765-44d6-a382-0b80b7443362\") " pod="calico-system/calico-typha-74cc58f5d8-jsqpm" Dec 13 14:21:53.787388 kubelet[2223]: I1213 14:21:53.787339 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-cni-net-dir\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787388 kubelet[2223]: I1213 14:21:53.787360 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-tigera-ca-bundle\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787503 kubelet[2223]: I1213 14:21:53.787380 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-node-certs\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " 
pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787503 kubelet[2223]: I1213 14:21:53.787411 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-lib-modules\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.787503 kubelet[2223]: I1213 14:21:53.787446 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0-var-lib-calico\") pod \"calico-node-zlvk7\" (UID: \"ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0\") " pod="calico-system/calico-node-zlvk7" Dec 13 14:21:53.880963 kubelet[2223]: I1213 14:21:53.880867 2223 topology_manager.go:215] "Topology Admit Handler" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" podNamespace="calico-system" podName="csi-node-driver-8gfp4" Dec 13 14:21:53.881370 kubelet[2223]: E1213 14:21:53.881344 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:21:53.893385 kubelet[2223]: E1213 14:21:53.891140 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.893385 kubelet[2223]: W1213 14:21:53.891165 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.893385 kubelet[2223]: E1213 14:21:53.891196 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating 
Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.898974 kubelet[2223]: E1213 14:21:53.898948 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.899090 kubelet[2223]: W1213 14:21:53.899074 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.899873 kubelet[2223]: E1213 14:21:53.899293 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.900133 kubelet[2223]: E1213 14:21:53.900111 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.900260 kubelet[2223]: W1213 14:21:53.900246 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.900409 kubelet[2223]: E1213 14:21:53.900385 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.900584 kubelet[2223]: E1213 14:21:53.900571 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.900678 kubelet[2223]: W1213 14:21:53.900664 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.900793 kubelet[2223]: E1213 14:21:53.900778 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.901011 kubelet[2223]: E1213 14:21:53.900996 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.901093 kubelet[2223]: W1213 14:21:53.901081 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.901202 kubelet[2223]: E1213 14:21:53.901188 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.901373 kubelet[2223]: E1213 14:21:53.901360 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.901466 kubelet[2223]: W1213 14:21:53.901454 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.901579 kubelet[2223]: E1213 14:21:53.901564 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.901981 kubelet[2223]: E1213 14:21:53.901962 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.902077 kubelet[2223]: W1213 14:21:53.902064 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.902327 kubelet[2223]: E1213 14:21:53.902301 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.902454 kubelet[2223]: E1213 14:21:53.902439 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.902542 kubelet[2223]: W1213 14:21:53.902529 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.902690 kubelet[2223]: E1213 14:21:53.902668 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.902817 kubelet[2223]: E1213 14:21:53.902804 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.902902 kubelet[2223]: W1213 14:21:53.902891 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.903043 kubelet[2223]: E1213 14:21:53.903016 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.903157 kubelet[2223]: E1213 14:21:53.903145 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.903228 kubelet[2223]: W1213 14:21:53.903216 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.903374 kubelet[2223]: E1213 14:21:53.903352 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.903701 kubelet[2223]: E1213 14:21:53.903675 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.903806 kubelet[2223]: W1213 14:21:53.903793 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.903945 kubelet[2223]: E1213 14:21:53.903927 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.904144 kubelet[2223]: E1213 14:21:53.904130 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.904222 kubelet[2223]: W1213 14:21:53.904209 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.904329 kubelet[2223]: E1213 14:21:53.904314 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.904550 kubelet[2223]: E1213 14:21:53.904533 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.904624 kubelet[2223]: W1213 14:21:53.904611 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.904712 kubelet[2223]: E1213 14:21:53.904701 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.907462 kubelet[2223]: E1213 14:21:53.907445 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.907546 kubelet[2223]: W1213 14:21:53.907532 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.907711 kubelet[2223]: E1213 14:21:53.907662 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.907996 kubelet[2223]: E1213 14:21:53.907981 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.908163 kubelet[2223]: W1213 14:21:53.908139 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.908292 kubelet[2223]: E1213 14:21:53.908263 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.908538 kubelet[2223]: E1213 14:21:53.908524 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.908680 kubelet[2223]: W1213 14:21:53.908636 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.908806 kubelet[2223]: E1213 14:21:53.908784 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.909299 kubelet[2223]: E1213 14:21:53.909231 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.909299 kubelet[2223]: W1213 14:21:53.909298 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.909432 kubelet[2223]: E1213 14:21:53.909412 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.914512 kubelet[2223]: E1213 14:21:53.914484 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.914512 kubelet[2223]: W1213 14:21:53.914506 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.914629 kubelet[2223]: E1213 14:21:53.914579 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.914749 kubelet[2223]: E1213 14:21:53.914731 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.914749 kubelet[2223]: W1213 14:21:53.914750 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.914814 kubelet[2223]: E1213 14:21:53.914789 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.915126 kubelet[2223]: E1213 14:21:53.915111 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.915192 kubelet[2223]: W1213 14:21:53.915179 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.915339 kubelet[2223]: E1213 14:21:53.915314 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.915469 kubelet[2223]: E1213 14:21:53.915456 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.915537 kubelet[2223]: W1213 14:21:53.915518 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.915596 kubelet[2223]: E1213 14:21:53.915585 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.932110 kubelet[2223]: E1213 14:21:53.930795 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.932110 kubelet[2223]: W1213 14:21:53.930813 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.932110 kubelet[2223]: E1213 14:21:53.930832 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.972926 kubelet[2223]: E1213 14:21:53.972889 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.972926 kubelet[2223]: W1213 14:21:53.972913 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.972926 kubelet[2223]: E1213 14:21:53.972933 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.974140 kubelet[2223]: E1213 14:21:53.974115 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.974140 kubelet[2223]: W1213 14:21:53.974134 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.974226 kubelet[2223]: E1213 14:21:53.974149 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.974403 kubelet[2223]: E1213 14:21:53.974378 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.974403 kubelet[2223]: W1213 14:21:53.974391 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.974403 kubelet[2223]: E1213 14:21:53.974403 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.974625 kubelet[2223]: E1213 14:21:53.974605 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.974625 kubelet[2223]: W1213 14:21:53.974620 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.974710 kubelet[2223]: E1213 14:21:53.974633 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.974828 kubelet[2223]: E1213 14:21:53.974808 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.974828 kubelet[2223]: W1213 14:21:53.974822 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.974911 kubelet[2223]: E1213 14:21:53.974833 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.975007 kubelet[2223]: E1213 14:21:53.974989 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975007 kubelet[2223]: W1213 14:21:53.975001 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975070 kubelet[2223]: E1213 14:21:53.975012 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.975153 kubelet[2223]: E1213 14:21:53.975136 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975153 kubelet[2223]: W1213 14:21:53.975147 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975153 kubelet[2223]: E1213 14:21:53.975156 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.975296 kubelet[2223]: E1213 14:21:53.975280 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975296 kubelet[2223]: W1213 14:21:53.975290 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975296 kubelet[2223]: E1213 14:21:53.975299 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.975436 kubelet[2223]: E1213 14:21:53.975419 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975436 kubelet[2223]: W1213 14:21:53.975429 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975436 kubelet[2223]: E1213 14:21:53.975438 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.975564 kubelet[2223]: E1213 14:21:53.975549 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975564 kubelet[2223]: W1213 14:21:53.975558 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975564 kubelet[2223]: E1213 14:21:53.975567 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.975697 kubelet[2223]: E1213 14:21:53.975682 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975697 kubelet[2223]: W1213 14:21:53.975692 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975761 kubelet[2223]: E1213 14:21:53.975703 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.975841 kubelet[2223]: E1213 14:21:53.975824 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.975841 kubelet[2223]: W1213 14:21:53.975834 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.975841 kubelet[2223]: E1213 14:21:53.975855 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.976028 kubelet[2223]: E1213 14:21:53.976009 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976028 kubelet[2223]: W1213 14:21:53.976020 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976028 kubelet[2223]: E1213 14:21:53.976030 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.976175 kubelet[2223]: E1213 14:21:53.976158 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976175 kubelet[2223]: W1213 14:21:53.976168 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976175 kubelet[2223]: E1213 14:21:53.976178 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.976320 kubelet[2223]: E1213 14:21:53.976303 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976320 kubelet[2223]: W1213 14:21:53.976313 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976320 kubelet[2223]: E1213 14:21:53.976322 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.976469 kubelet[2223]: E1213 14:21:53.976452 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976469 kubelet[2223]: W1213 14:21:53.976462 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976528 kubelet[2223]: E1213 14:21:53.976472 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.976628 kubelet[2223]: E1213 14:21:53.976611 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976628 kubelet[2223]: W1213 14:21:53.976622 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976696 kubelet[2223]: E1213 14:21:53.976631 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.976784 kubelet[2223]: E1213 14:21:53.976768 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976784 kubelet[2223]: W1213 14:21:53.976778 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.976784 kubelet[2223]: E1213 14:21:53.976787 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.976937 kubelet[2223]: E1213 14:21:53.976921 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.976937 kubelet[2223]: W1213 14:21:53.976932 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.977006 kubelet[2223]: E1213 14:21:53.976943 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.977628 kubelet[2223]: E1213 14:21:53.977071 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.977628 kubelet[2223]: W1213 14:21:53.977082 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.977628 kubelet[2223]: E1213 14:21:53.977091 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.988414 kubelet[2223]: E1213 14:21:53.988381 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.988414 kubelet[2223]: W1213 14:21:53.988404 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.988414 kubelet[2223]: E1213 14:21:53.988417 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.988559 kubelet[2223]: I1213 14:21:53.988443 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/edcf1038-965b-4103-930d-3cbf62798dd0-socket-dir\") pod \"csi-node-driver-8gfp4\" (UID: \"edcf1038-965b-4103-930d-3cbf62798dd0\") " pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:21:53.988762 kubelet[2223]: E1213 14:21:53.988746 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.988762 kubelet[2223]: W1213 14:21:53.988761 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.988830 kubelet[2223]: E1213 14:21:53.988778 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.988830 kubelet[2223]: I1213 14:21:53.988796 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/edcf1038-965b-4103-930d-3cbf62798dd0-kubelet-dir\") pod \"csi-node-driver-8gfp4\" (UID: \"edcf1038-965b-4103-930d-3cbf62798dd0\") " pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:21:53.989025 kubelet[2223]: E1213 14:21:53.989010 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.989025 kubelet[2223]: W1213 14:21:53.989024 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.989084 kubelet[2223]: E1213 14:21:53.989045 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.989084 kubelet[2223]: I1213 14:21:53.989065 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/edcf1038-965b-4103-930d-3cbf62798dd0-registration-dir\") pod \"csi-node-driver-8gfp4\" (UID: \"edcf1038-965b-4103-930d-3cbf62798dd0\") " pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:21:53.989277 kubelet[2223]: E1213 14:21:53.989263 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.989277 kubelet[2223]: W1213 14:21:53.989276 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.989338 kubelet[2223]: E1213 14:21:53.989292 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.989338 kubelet[2223]: I1213 14:21:53.989310 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2sx8\" (UniqueName: \"kubernetes.io/projected/edcf1038-965b-4103-930d-3cbf62798dd0-kube-api-access-m2sx8\") pod \"csi-node-driver-8gfp4\" (UID: \"edcf1038-965b-4103-930d-3cbf62798dd0\") " pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:21:53.989493 kubelet[2223]: E1213 14:21:53.989473 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.989493 kubelet[2223]: W1213 14:21:53.989492 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.989555 kubelet[2223]: E1213 14:21:53.989507 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.989555 kubelet[2223]: I1213 14:21:53.989525 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/edcf1038-965b-4103-930d-3cbf62798dd0-varrun\") pod \"csi-node-driver-8gfp4\" (UID: \"edcf1038-965b-4103-930d-3cbf62798dd0\") " pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:21:53.989715 kubelet[2223]: E1213 14:21:53.989699 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.989715 kubelet[2223]: W1213 14:21:53.989713 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.989800 kubelet[2223]: E1213 14:21:53.989778 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.989904 kubelet[2223]: E1213 14:21:53.989890 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.989904 kubelet[2223]: W1213 14:21:53.989903 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.990005 kubelet[2223]: E1213 14:21:53.989994 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.990104 kubelet[2223]: E1213 14:21:53.990093 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.990104 kubelet[2223]: W1213 14:21:53.990104 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.990152 kubelet[2223]: E1213 14:21:53.990115 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.990556 kubelet[2223]: E1213 14:21:53.990536 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.990556 kubelet[2223]: W1213 14:21:53.990554 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.990647 kubelet[2223]: E1213 14:21:53.990569 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.990751 kubelet[2223]: E1213 14:21:53.990738 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.990751 kubelet[2223]: W1213 14:21:53.990750 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.990826 kubelet[2223]: E1213 14:21:53.990765 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.990971 kubelet[2223]: E1213 14:21:53.990954 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.990971 kubelet[2223]: W1213 14:21:53.990966 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.991048 kubelet[2223]: E1213 14:21:53.990977 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.991166 kubelet[2223]: E1213 14:21:53.991151 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.991166 kubelet[2223]: W1213 14:21:53.991163 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.991241 kubelet[2223]: E1213 14:21:53.991173 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.991311 kubelet[2223]: E1213 14:21:53.991297 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.991311 kubelet[2223]: W1213 14:21:53.991307 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.991372 kubelet[2223]: E1213 14:21:53.991316 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:53.991456 kubelet[2223]: E1213 14:21:53.991443 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.991456 kubelet[2223]: W1213 14:21:53.991452 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.991535 kubelet[2223]: E1213 14:21:53.991462 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:53.992280 kubelet[2223]: E1213 14:21:53.991588 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:53.992280 kubelet[2223]: W1213 14:21:53.991596 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:53.992280 kubelet[2223]: E1213 14:21:53.991605 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.026436 kubelet[2223]: E1213 14:21:54.026407 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:54.027095 env[1320]: time="2024-12-13T14:21:54.027063987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cc58f5d8-jsqpm,Uid:7e02a5ec-a765-44d6-a382-0b80b7443362,Namespace:calico-system,Attempt:0,}" Dec 13 14:21:54.040616 env[1320]: time="2024-12-13T14:21:54.040549511Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:54.040696 env[1320]: time="2024-12-13T14:21:54.040632155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:54.040696 env[1320]: time="2024-12-13T14:21:54.040657436Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:54.040956 env[1320]: time="2024-12-13T14:21:54.040917647Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a58f1d5a80b8b4bd46c3311aa1b17b35cc0929414491c9247104cdb4f36f2b65 pid=2680 runtime=io.containerd.runc.v2 Dec 13 14:21:54.071934 kubelet[2223]: E1213 14:21:54.071692 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:54.072314 env[1320]: time="2024-12-13T14:21:54.072276760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zlvk7,Uid:ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0,Namespace:calico-system,Attempt:0,}" Dec 13 14:21:54.085891 env[1320]: time="2024-12-13T14:21:54.083894326Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:21:54.085891 env[1320]: time="2024-12-13T14:21:54.083934728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:21:54.085891 env[1320]: time="2024-12-13T14:21:54.083944769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:21:54.085891 env[1320]: time="2024-12-13T14:21:54.084099175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4 pid=2715 runtime=io.containerd.runc.v2 Dec 13 14:21:54.090980 kubelet[2223]: E1213 14:21:54.090957 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.090980 kubelet[2223]: W1213 14:21:54.090977 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.091108 kubelet[2223]: E1213 14:21:54.090996 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.091227 kubelet[2223]: E1213 14:21:54.091209 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.091227 kubelet[2223]: W1213 14:21:54.091225 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.091298 kubelet[2223]: E1213 14:21:54.091243 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.091482 kubelet[2223]: E1213 14:21:54.091465 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.091482 kubelet[2223]: W1213 14:21:54.091481 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.091576 kubelet[2223]: E1213 14:21:54.091501 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.092156 kubelet[2223]: E1213 14:21:54.092131 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.092156 kubelet[2223]: W1213 14:21:54.092148 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.092256 kubelet[2223]: E1213 14:21:54.092165 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.092407 kubelet[2223]: E1213 14:21:54.092388 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.092407 kubelet[2223]: W1213 14:21:54.092405 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.092494 kubelet[2223]: E1213 14:21:54.092488 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.092584 kubelet[2223]: E1213 14:21:54.092571 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.092584 kubelet[2223]: W1213 14:21:54.092582 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.092660 kubelet[2223]: E1213 14:21:54.092651 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.092762 kubelet[2223]: E1213 14:21:54.092748 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.092762 kubelet[2223]: W1213 14:21:54.092761 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.092851 kubelet[2223]: E1213 14:21:54.092831 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.092962 kubelet[2223]: E1213 14:21:54.092949 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.092962 kubelet[2223]: W1213 14:21:54.092960 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.092962 kubelet[2223]: E1213 14:21:54.092971 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.093145 kubelet[2223]: E1213 14:21:54.093132 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.093145 kubelet[2223]: W1213 14:21:54.093143 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.093263 kubelet[2223]: E1213 14:21:54.093165 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.093363 kubelet[2223]: E1213 14:21:54.093351 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.093363 kubelet[2223]: W1213 14:21:54.093360 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.093363 kubelet[2223]: E1213 14:21:54.093373 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.093656 kubelet[2223]: E1213 14:21:54.093642 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.093656 kubelet[2223]: W1213 14:21:54.093654 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.093901 kubelet[2223]: E1213 14:21:54.093795 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.094059 kubelet[2223]: E1213 14:21:54.094041 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.094059 kubelet[2223]: W1213 14:21:54.094059 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.094130 kubelet[2223]: E1213 14:21:54.094105 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.094251 kubelet[2223]: E1213 14:21:54.094238 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.094251 kubelet[2223]: W1213 14:21:54.094250 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.094395 kubelet[2223]: E1213 14:21:54.094345 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.094432 kubelet[2223]: E1213 14:21:54.094417 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.094432 kubelet[2223]: W1213 14:21:54.094425 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.094559 kubelet[2223]: E1213 14:21:54.094498 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.094620 kubelet[2223]: E1213 14:21:54.094608 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.094620 kubelet[2223]: W1213 14:21:54.094617 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.094694 kubelet[2223]: E1213 14:21:54.094650 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.094818 kubelet[2223]: E1213 14:21:54.094806 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.094818 kubelet[2223]: W1213 14:21:54.094816 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.094898 kubelet[2223]: E1213 14:21:54.094829 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.095051 kubelet[2223]: E1213 14:21:54.095038 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095051 kubelet[2223]: W1213 14:21:54.095049 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.095122 kubelet[2223]: E1213 14:21:54.095065 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.095247 kubelet[2223]: E1213 14:21:54.095236 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095247 kubelet[2223]: W1213 14:21:54.095247 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.095310 kubelet[2223]: E1213 14:21:54.095260 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.095411 kubelet[2223]: E1213 14:21:54.095399 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095411 kubelet[2223]: W1213 14:21:54.095410 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.095473 kubelet[2223]: E1213 14:21:54.095422 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.095560 kubelet[2223]: E1213 14:21:54.095549 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095560 kubelet[2223]: W1213 14:21:54.095559 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.095619 kubelet[2223]: E1213 14:21:54.095569 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.095753 kubelet[2223]: E1213 14:21:54.095739 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095753 kubelet[2223]: W1213 14:21:54.095752 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.095838 kubelet[2223]: E1213 14:21:54.095826 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.095921 kubelet[2223]: E1213 14:21:54.095910 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.095921 kubelet[2223]: W1213 14:21:54.095920 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.096007 kubelet[2223]: E1213 14:21:54.095996 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.096085 kubelet[2223]: E1213 14:21:54.096075 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.096085 kubelet[2223]: W1213 14:21:54.096086 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.096151 kubelet[2223]: E1213 14:21:54.096098 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.096278 kubelet[2223]: E1213 14:21:54.096267 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.096278 kubelet[2223]: W1213 14:21:54.096277 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.096347 kubelet[2223]: E1213 14:21:54.096288 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:54.096463 kubelet[2223]: E1213 14:21:54.096449 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.096463 kubelet[2223]: W1213 14:21:54.096461 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.096520 kubelet[2223]: E1213 14:21:54.096472 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.103114 env[1320]: time="2024-12-13T14:21:54.103052929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-74cc58f5d8-jsqpm,Uid:7e02a5ec-a765-44d6-a382-0b80b7443362,Namespace:calico-system,Attempt:0,} returns sandbox id \"a58f1d5a80b8b4bd46c3311aa1b17b35cc0929414491c9247104cdb4f36f2b65\"" Dec 13 14:21:54.106335 kubelet[2223]: E1213 14:21:54.106317 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:54.107570 env[1320]: time="2024-12-13T14:21:54.107386910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 14:21:54.108994 kubelet[2223]: E1213 14:21:54.108972 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:54.108994 kubelet[2223]: W1213 14:21:54.108991 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:54.109105 kubelet[2223]: E1213 14:21:54.109007 2223 plugins.go:730] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:54.154570 env[1320]: time="2024-12-13T14:21:54.154465002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zlvk7,Uid:ad5425b1-a5a3-473f-a6d4-a57fcd02fcc0,Namespace:calico-system,Attempt:0,} returns sandbox id \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\"" Dec 13 14:21:54.155218 kubelet[2223]: E1213 14:21:54.155194 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:54.654000 audit[2783]: NETFILTER_CFG table=filter:93 family=2 entries=17 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:54.654000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6652 a0=3 a1=fffffe9e0ef0 a2=0 a3=1 items=0 ppid=2387 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:54.654000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:54.662000 audit[2783]: NETFILTER_CFG table=nat:94 family=2 entries=12 op=nft_register_rule pid=2783 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:21:54.662000 audit[2783]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffe9e0ef0 a2=0 a3=1 items=0 ppid=2387 pid=2783 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:21:54.662000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:21:55.120095 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3057669686.mount: Deactivated successfully. Dec 13 14:21:55.574605 kubelet[2223]: E1213 14:21:55.574501 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:21:55.729516 env[1320]: time="2024-12-13T14:21:55.729449006Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:55.730862 env[1320]: time="2024-12-13T14:21:55.730823181Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:55.732907 env[1320]: time="2024-12-13T14:21:55.732880503Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/typha:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:55.734103 env[1320]: time="2024-12-13T14:21:55.734071551Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:55.734783 env[1320]: time="2024-12-13T14:21:55.734752098Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:1d1fc316829ae1650b0b1629b54232520f297e7c3b1444eecd290ae088902a28\"" Dec 13 14:21:55.740280 env[1320]: 
time="2024-12-13T14:21:55.738046630Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 14:21:55.755344 env[1320]: time="2024-12-13T14:21:55.755248600Z" level=info msg="CreateContainer within sandbox \"a58f1d5a80b8b4bd46c3311aa1b17b35cc0929414491c9247104cdb4f36f2b65\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 14:21:55.765850 env[1320]: time="2024-12-13T14:21:55.765805183Z" level=info msg="CreateContainer within sandbox \"a58f1d5a80b8b4bd46c3311aa1b17b35cc0929414491c9247104cdb4f36f2b65\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c4c6b6346911ea3328279c6b91349b0980c29cfaf68d967910160e402b70b9d8\"" Dec 13 14:21:55.766412 env[1320]: time="2024-12-13T14:21:55.766382766Z" level=info msg="StartContainer for \"c4c6b6346911ea3328279c6b91349b0980c29cfaf68d967910160e402b70b9d8\"" Dec 13 14:21:55.833181 env[1320]: time="2024-12-13T14:21:55.833077559Z" level=info msg="StartContainer for \"c4c6b6346911ea3328279c6b91349b0980c29cfaf68d967910160e402b70b9d8\" returns successfully" Dec 13 14:21:56.630217 kubelet[2223]: E1213 14:21:56.630180 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:56.642000 kubelet[2223]: I1213 14:21:56.641971 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-74cc58f5d8-jsqpm" podStartSLOduration=2.013782434 podStartE2EDuration="3.641924174s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:21:54.10714394 +0000 UTC m=+21.640085644" lastFinishedPulling="2024-12-13 14:21:55.73528568 +0000 UTC m=+23.268227384" observedRunningTime="2024-12-13 14:21:56.641699765 +0000 UTC m=+24.174641469" watchObservedRunningTime="2024-12-13 14:21:56.641924174 +0000 UTC m=+24.174865878" Dec 13 14:21:56.697055 kubelet[2223]: E1213 14:21:56.697015 2223 
driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.697055 kubelet[2223]: W1213 14:21:56.697050 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.697186 kubelet[2223]: E1213 14:21:56.697075 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.697307 kubelet[2223]: E1213 14:21:56.697280 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.697307 kubelet[2223]: W1213 14:21:56.697292 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.697307 kubelet[2223]: E1213 14:21:56.697303 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.697491 kubelet[2223]: E1213 14:21:56.697473 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.697491 kubelet[2223]: W1213 14:21:56.697483 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.697548 kubelet[2223]: E1213 14:21:56.697495 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.697652 kubelet[2223]: E1213 14:21:56.697634 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.697652 kubelet[2223]: W1213 14:21:56.697645 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.697706 kubelet[2223]: E1213 14:21:56.697654 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.697804 kubelet[2223]: E1213 14:21:56.697791 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.697804 kubelet[2223]: W1213 14:21:56.697802 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.697867 kubelet[2223]: E1213 14:21:56.697812 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.697981 kubelet[2223]: E1213 14:21:56.697970 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698009 kubelet[2223]: W1213 14:21:56.697981 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698009 kubelet[2223]: E1213 14:21:56.697991 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.698166 kubelet[2223]: E1213 14:21:56.698146 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698166 kubelet[2223]: W1213 14:21:56.698156 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698166 kubelet[2223]: E1213 14:21:56.698166 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.698312 kubelet[2223]: E1213 14:21:56.698302 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698363 kubelet[2223]: W1213 14:21:56.698312 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698363 kubelet[2223]: E1213 14:21:56.698321 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.698459 kubelet[2223]: E1213 14:21:56.698448 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698459 kubelet[2223]: W1213 14:21:56.698458 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698507 kubelet[2223]: E1213 14:21:56.698467 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.698594 kubelet[2223]: E1213 14:21:56.698584 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698594 kubelet[2223]: W1213 14:21:56.698593 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698652 kubelet[2223]: E1213 14:21:56.698602 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.698728 kubelet[2223]: E1213 14:21:56.698718 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698728 kubelet[2223]: W1213 14:21:56.698727 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698787 kubelet[2223]: E1213 14:21:56.698737 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.698885 kubelet[2223]: E1213 14:21:56.698875 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.698918 kubelet[2223]: W1213 14:21:56.698886 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.698918 kubelet[2223]: E1213 14:21:56.698895 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.699031 kubelet[2223]: E1213 14:21:56.699021 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.699031 kubelet[2223]: W1213 14:21:56.699030 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.699077 kubelet[2223]: E1213 14:21:56.699041 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.699172 kubelet[2223]: E1213 14:21:56.699163 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.699172 kubelet[2223]: W1213 14:21:56.699172 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.699220 kubelet[2223]: E1213 14:21:56.699181 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.699304 kubelet[2223]: E1213 14:21:56.699296 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.699304 kubelet[2223]: W1213 14:21:56.699304 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.699352 kubelet[2223]: E1213 14:21:56.699313 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.709644 kubelet[2223]: E1213 14:21:56.709620 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.709644 kubelet[2223]: W1213 14:21:56.709638 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.709644 kubelet[2223]: E1213 14:21:56.709653 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.709834 kubelet[2223]: E1213 14:21:56.709820 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.709834 kubelet[2223]: W1213 14:21:56.709832 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.709917 kubelet[2223]: E1213 14:21:56.709882 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.710060 kubelet[2223]: E1213 14:21:56.710032 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.710060 kubelet[2223]: W1213 14:21:56.710045 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.710060 kubelet[2223]: E1213 14:21:56.710062 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.710286 kubelet[2223]: E1213 14:21:56.710273 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.710286 kubelet[2223]: W1213 14:21:56.710285 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.710364 kubelet[2223]: E1213 14:21:56.710302 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.712362 kubelet[2223]: E1213 14:21:56.712337 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.712362 kubelet[2223]: W1213 14:21:56.712349 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.712362 kubelet[2223]: E1213 14:21:56.712365 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.712867 kubelet[2223]: E1213 14:21:56.712518 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.712867 kubelet[2223]: W1213 14:21:56.712526 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.712867 kubelet[2223]: E1213 14:21:56.712536 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.712867 kubelet[2223]: E1213 14:21:56.712737 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.712867 kubelet[2223]: W1213 14:21:56.712745 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.712867 kubelet[2223]: E1213 14:21:56.712757 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.713415 kubelet[2223]: E1213 14:21:56.713204 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.713415 kubelet[2223]: W1213 14:21:56.713222 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.713415 kubelet[2223]: E1213 14:21:56.713236 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.713787 kubelet[2223]: E1213 14:21:56.713443 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.713787 kubelet[2223]: W1213 14:21:56.713453 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.713787 kubelet[2223]: E1213 14:21:56.713482 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.713787 kubelet[2223]: E1213 14:21:56.713632 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.713787 kubelet[2223]: W1213 14:21:56.713640 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.713787 kubelet[2223]: E1213 14:21:56.713653 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.714634 kubelet[2223]: E1213 14:21:56.713901 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.714634 kubelet[2223]: W1213 14:21:56.713915 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.714634 kubelet[2223]: E1213 14:21:56.713935 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.714919 kubelet[2223]: E1213 14:21:56.714902 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.715672 kubelet[2223]: W1213 14:21:56.715536 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.716256 kubelet[2223]: E1213 14:21:56.716123 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.716256 kubelet[2223]: W1213 14:21:56.716138 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.716256 kubelet[2223]: E1213 14:21:56.716155 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.716821 kubelet[2223]: E1213 14:21:56.716342 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.716821 kubelet[2223]: W1213 14:21:56.716350 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.716821 kubelet[2223]: E1213 14:21:56.716361 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.716821 kubelet[2223]: E1213 14:21:56.716684 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.716821 kubelet[2223]: W1213 14:21:56.716693 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.716821 kubelet[2223]: E1213 14:21:56.716706 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.717192 kubelet[2223]: E1213 14:21:56.717171 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.717794 kubelet[2223]: E1213 14:21:56.717780 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.717913 kubelet[2223]: W1213 14:21:56.717899 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.717983 kubelet[2223]: E1213 14:21:56.717973 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.718268 kubelet[2223]: E1213 14:21:56.718254 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.718362 kubelet[2223]: W1213 14:21:56.718349 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.718434 kubelet[2223]: E1213 14:21:56.718425 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 14:21:56.718875 kubelet[2223]: E1213 14:21:56.718858 2223 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 14:21:56.718963 kubelet[2223]: W1213 14:21:56.718949 2223 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 14:21:56.719046 kubelet[2223]: E1213 14:21:56.719035 2223 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 14:21:56.897405 env[1320]: time="2024-12-13T14:21:56.897293817Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:56.899796 env[1320]: time="2024-12-13T14:21:56.899747632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:56.901267 env[1320]: time="2024-12-13T14:21:56.901241689Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:56.902429 env[1320]: time="2024-12-13T14:21:56.902408134Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:21:56.903206 env[1320]: time="2024-12-13T14:21:56.903099680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image 
reference \"sha256:ece9bca32e64e726de8bbfc9e175a3ca91e0881cd40352bfcd1d107411f4f348\"" Dec 13 14:21:56.905335 env[1320]: time="2024-12-13T14:21:56.905303005Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 14:21:56.915763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2629347738.mount: Deactivated successfully. Dec 13 14:21:56.919214 env[1320]: time="2024-12-13T14:21:56.919181818Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388\"" Dec 13 14:21:56.919581 env[1320]: time="2024-12-13T14:21:56.919555632Z" level=info msg="StartContainer for \"19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388\"" Dec 13 14:21:56.996367 env[1320]: time="2024-12-13T14:21:56.996326299Z" level=info msg="StartContainer for \"19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388\" returns successfully" Dec 13 14:21:57.045380 env[1320]: time="2024-12-13T14:21:57.045336950Z" level=info msg="shim disconnected" id=19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388 Dec 13 14:21:57.045594 env[1320]: time="2024-12-13T14:21:57.045576199Z" level=warning msg="cleaning up after shim disconnected" id=19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388 namespace=k8s.io Dec 13 14:21:57.045662 env[1320]: time="2024-12-13T14:21:57.045649282Z" level=info msg="cleaning up dead shim" Dec 13 14:21:57.052219 env[1320]: time="2024-12-13T14:21:57.052186483Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:21:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2905 runtime=io.containerd.runc.v2\n" Dec 13 14:21:57.573544 kubelet[2223]: E1213 14:21:57.573498 2223 pod_workers.go:1298] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:21:57.631434 kubelet[2223]: I1213 14:21:57.631404 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:21:57.631755 kubelet[2223]: E1213 14:21:57.631663 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:57.632703 kubelet[2223]: E1213 14:21:57.631972 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:21:57.635873 env[1320]: time="2024-12-13T14:21:57.634405553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 14:21:57.913517 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19d118178ccd2a06aadb1f7ff4c5f96990bcf4988508512b1d89492fa942e388-rootfs.mount: Deactivated successfully. Dec 13 14:21:59.573443 kubelet[2223]: E1213 14:21:59.573397 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:22:00.863838 systemd[1]: Started sshd@7-10.0.0.138:22-10.0.0.1:54474.service. Dec 13 14:22:00.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:00.867429 kernel: kauditd_printk_skb: 8 callbacks suppressed Dec 13 14:22:00.867509 kernel: audit: type=1130 audit(1734099720.862:295): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:00.916000 audit[2928]: USER_ACCT pid=2928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.918781 sshd[2928]: Accepted publickey for core from 10.0.0.1 port 54474 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:00.921859 kernel: audit: type=1101 audit(1734099720.916:296): pid=2928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.920000 audit[2928]: CRED_ACQ pid=2928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.925943 sshd[2928]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:00.927475 kernel: audit: type=1103 audit(1734099720.920:297): pid=2928 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.927531 kernel: audit: type=1006 audit(1734099720.920:298): pid=2928 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) 
old-ses=4294967295 ses=8 res=1 Dec 13 14:22:00.927557 kernel: audit: type=1300 audit(1734099720.920:298): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda5c4f90 a2=3 a3=1 items=0 ppid=1 pid=2928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:00.920000 audit[2928]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffda5c4f90 a2=3 a3=1 items=0 ppid=1 pid=2928 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:00.920000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:00.931669 kernel: audit: type=1327 audit(1734099720.920:298): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:00.934811 systemd-logind[1303]: New session 8 of user core. Dec 13 14:22:00.935419 systemd[1]: Started session-8.scope. 
Dec 13 14:22:00.937000 audit[2928]: USER_START pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.938000 audit[2931]: CRED_ACQ pid=2931 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.945666 kernel: audit: type=1105 audit(1734099720.937:299): pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:00.945719 kernel: audit: type=1103 audit(1734099720.938:300): pid=2931 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:01.064192 sshd[2928]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:01.063000 audit[2928]: USER_END pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:01.066435 systemd[1]: sshd@7-10.0.0.138:22-10.0.0.1:54474.service: Deactivated successfully. Dec 13 14:22:01.067216 systemd[1]: session-8.scope: Deactivated successfully. 
Dec 13 14:22:01.063000 audit[2928]: CRED_DISP pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:01.072517 kernel: audit: type=1106 audit(1734099721.063:301): pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:01.072579 kernel: audit: type=1104 audit(1734099721.063:302): pid=2928 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:01.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.0.138:22-10.0.0.1:54474 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:01.073165 systemd-logind[1303]: Session 8 logged out. Waiting for processes to exit. Dec 13 14:22:01.074587 systemd-logind[1303]: Removed session 8. 
Dec 13 14:22:01.572935 kubelet[2223]: E1213 14:22:01.572881 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:22:01.634004 env[1320]: time="2024-12-13T14:22:01.633963227Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:01.635129 env[1320]: time="2024-12-13T14:22:01.635095583Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:01.636870 env[1320]: time="2024-12-13T14:22:01.636824397Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/cni:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:01.638141 env[1320]: time="2024-12-13T14:22:01.638111917Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:01.638702 env[1320]: time="2024-12-13T14:22:01.638664535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:e5ca62af4ff61b88f55fe4e0d7723151103d3f6a470fd4ebb311a2de27a9597f\"" Dec 13 14:22:01.641013 env[1320]: time="2024-12-13T14:22:01.640985128Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 14:22:01.653732 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2497444694.mount: Deactivated successfully. Dec 13 14:22:01.656601 env[1320]: time="2024-12-13T14:22:01.656562577Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237\"" Dec 13 14:22:01.657231 env[1320]: time="2024-12-13T14:22:01.657203477Z" level=info msg="StartContainer for \"aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237\"" Dec 13 14:22:01.731083 env[1320]: time="2024-12-13T14:22:01.731042916Z" level=info msg="StartContainer for \"aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237\" returns successfully" Dec 13 14:22:02.237401 env[1320]: time="2024-12-13T14:22:02.237349950Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/calico-kubeconfig\": WRITE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 14:22:02.243116 kubelet[2223]: I1213 14:22:02.243072 2223 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 14:22:02.263433 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237-rootfs.mount: Deactivated successfully. 
Dec 13 14:22:02.288151 kubelet[2223]: I1213 14:22:02.288094 2223 topology_manager.go:215] "Topology Admit Handler" podUID="f9893a5a-eda8-409b-b506-579bb2498aa1" podNamespace="kube-system" podName="coredns-76f75df574-hzwcj" Dec 13 14:22:02.288565 kubelet[2223]: I1213 14:22:02.288500 2223 topology_manager.go:215] "Topology Admit Handler" podUID="accb085d-f789-4c9c-a736-d74c6e73b549" podNamespace="calico-system" podName="calico-kube-controllers-7778b75b4d-fztb7" Dec 13 14:22:02.288674 kubelet[2223]: I1213 14:22:02.288649 2223 topology_manager.go:215] "Topology Admit Handler" podUID="610c2862-cbee-4137-8255-b514a33ef2be" podNamespace="calico-apiserver" podName="calico-apiserver-7cd595757-gc4rp" Dec 13 14:22:02.289153 kubelet[2223]: I1213 14:22:02.288805 2223 topology_manager.go:215] "Topology Admit Handler" podUID="08600e9c-2e7c-44be-b230-0c231e6c0b50" podNamespace="kube-system" podName="coredns-76f75df574-bxn7n" Dec 13 14:22:02.289284 kubelet[2223]: I1213 14:22:02.289267 2223 topology_manager.go:215] "Topology Admit Handler" podUID="4349a5e2-a677-4bbe-9d6f-1535050c8cda" podNamespace="calico-apiserver" podName="calico-apiserver-7cd595757-m2x9g" Dec 13 14:22:02.290050 env[1320]: time="2024-12-13T14:22:02.289925182Z" level=info msg="shim disconnected" id=aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237 Dec 13 14:22:02.290144 env[1320]: time="2024-12-13T14:22:02.290053385Z" level=warning msg="cleaning up after shim disconnected" id=aecce9b7c1b4fbc8e4e32ec9c885ca089f380244ef6f490510ca5f3c3231b237 namespace=k8s.io Dec 13 14:22:02.290144 env[1320]: time="2024-12-13T14:22:02.290101507Z" level=info msg="cleaning up dead shim" Dec 13 14:22:02.303530 env[1320]: time="2024-12-13T14:22:02.303489472Z" level=warning msg="cleanup warnings time=\"2024-12-13T14:22:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2991 runtime=io.containerd.runc.v2\n" Dec 13 14:22:02.351685 kubelet[2223]: I1213 14:22:02.351643 2223 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2g6ww\" (UniqueName: \"kubernetes.io/projected/610c2862-cbee-4137-8255-b514a33ef2be-kube-api-access-2g6ww\") pod \"calico-apiserver-7cd595757-gc4rp\" (UID: \"610c2862-cbee-4137-8255-b514a33ef2be\") " pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" Dec 13 14:22:02.351802 kubelet[2223]: I1213 14:22:02.351739 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f9893a5a-eda8-409b-b506-579bb2498aa1-config-volume\") pod \"coredns-76f75df574-hzwcj\" (UID: \"f9893a5a-eda8-409b-b506-579bb2498aa1\") " pod="kube-system/coredns-76f75df574-hzwcj" Dec 13 14:22:02.351802 kubelet[2223]: I1213 14:22:02.351783 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4349a5e2-a677-4bbe-9d6f-1535050c8cda-calico-apiserver-certs\") pod \"calico-apiserver-7cd595757-m2x9g\" (UID: \"4349a5e2-a677-4bbe-9d6f-1535050c8cda\") " pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" Dec 13 14:22:02.351883 kubelet[2223]: I1213 14:22:02.351836 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08600e9c-2e7c-44be-b230-0c231e6c0b50-config-volume\") pod \"coredns-76f75df574-bxn7n\" (UID: \"08600e9c-2e7c-44be-b230-0c231e6c0b50\") " pod="kube-system/coredns-76f75df574-bxn7n" Dec 13 14:22:02.351922 kubelet[2223]: I1213 14:22:02.351904 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgwj9\" (UniqueName: \"kubernetes.io/projected/4349a5e2-a677-4bbe-9d6f-1535050c8cda-kube-api-access-dgwj9\") pod \"calico-apiserver-7cd595757-m2x9g\" (UID: \"4349a5e2-a677-4bbe-9d6f-1535050c8cda\") " 
pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" Dec 13 14:22:02.352025 kubelet[2223]: I1213 14:22:02.352002 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/610c2862-cbee-4137-8255-b514a33ef2be-calico-apiserver-certs\") pod \"calico-apiserver-7cd595757-gc4rp\" (UID: \"610c2862-cbee-4137-8255-b514a33ef2be\") " pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" Dec 13 14:22:02.352067 kubelet[2223]: I1213 14:22:02.352032 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/accb085d-f789-4c9c-a736-d74c6e73b549-tigera-ca-bundle\") pod \"calico-kube-controllers-7778b75b4d-fztb7\" (UID: \"accb085d-f789-4c9c-a736-d74c6e73b549\") " pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" Dec 13 14:22:02.352067 kubelet[2223]: I1213 14:22:02.352056 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4pzbm\" (UniqueName: \"kubernetes.io/projected/accb085d-f789-4c9c-a736-d74c6e73b549-kube-api-access-4pzbm\") pod \"calico-kube-controllers-7778b75b4d-fztb7\" (UID: \"accb085d-f789-4c9c-a736-d74c6e73b549\") " pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" Dec 13 14:22:02.352123 kubelet[2223]: I1213 14:22:02.352084 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgz62\" (UniqueName: \"kubernetes.io/projected/08600e9c-2e7c-44be-b230-0c231e6c0b50-kube-api-access-rgz62\") pod \"coredns-76f75df574-bxn7n\" (UID: \"08600e9c-2e7c-44be-b230-0c231e6c0b50\") " pod="kube-system/coredns-76f75df574-bxn7n" Dec 13 14:22:02.352123 kubelet[2223]: I1213 14:22:02.352106 2223 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h85n9\" (UniqueName: 
\"kubernetes.io/projected/f9893a5a-eda8-409b-b506-579bb2498aa1-kube-api-access-h85n9\") pod \"coredns-76f75df574-hzwcj\" (UID: \"f9893a5a-eda8-409b-b506-579bb2498aa1\") " pod="kube-system/coredns-76f75df574-hzwcj" Dec 13 14:22:02.591470 kubelet[2223]: E1213 14:22:02.591446 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:02.592489 env[1320]: time="2024-12-13T14:22:02.592450818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzwcj,Uid:f9893a5a-eda8-409b-b506-579bb2498aa1,Namespace:kube-system,Attempt:0,}" Dec 13 14:22:02.595340 kubelet[2223]: E1213 14:22:02.595318 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:02.595991 env[1320]: time="2024-12-13T14:22:02.595962164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bxn7n,Uid:08600e9c-2e7c-44be-b230-0c231e6c0b50,Namespace:kube-system,Attempt:0,}" Dec 13 14:22:02.601379 env[1320]: time="2024-12-13T14:22:02.601342287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-gc4rp,Uid:610c2862-cbee-4137-8255-b514a33ef2be,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:22:02.601701 env[1320]: time="2024-12-13T14:22:02.601676017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-m2x9g,Uid:4349a5e2-a677-4bbe-9d6f-1535050c8cda,Namespace:calico-apiserver,Attempt:0,}" Dec 13 14:22:02.606450 env[1320]: time="2024-12-13T14:22:02.606420200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b75b4d-fztb7,Uid:accb085d-f789-4c9c-a736-d74c6e73b549,Namespace:calico-system,Attempt:0,}" Dec 13 14:22:02.644203 kubelet[2223]: E1213 14:22:02.643934 2223 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:02.657882 env[1320]: time="2024-12-13T14:22:02.647634928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 14:22:02.880795 env[1320]: time="2024-12-13T14:22:02.880640620Z" level=error msg="Failed to destroy network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.881400 env[1320]: time="2024-12-13T14:22:02.881344121Z" level=error msg="encountered an error cleaning up failed sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.881464 env[1320]: time="2024-12-13T14:22:02.881408483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b75b4d-fztb7,Uid:accb085d-f789-4c9c-a736-d74c6e73b549,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.881751 kubelet[2223]: E1213 14:22:02.881719 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.881863 kubelet[2223]: E1213 14:22:02.881789 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" Dec 13 14:22:02.881863 kubelet[2223]: E1213 14:22:02.881810 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" Dec 13 14:22:02.881983 kubelet[2223]: E1213 14:22:02.881886 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7778b75b4d-fztb7_calico-system(accb085d-f789-4c9c-a736-d74c6e73b549)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7778b75b4d-fztb7_calico-system(accb085d-f789-4c9c-a736-d74c6e73b549)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" podUID="accb085d-f789-4c9c-a736-d74c6e73b549" Dec 13 14:22:02.889299 
env[1320]: time="2024-12-13T14:22:02.889245480Z" level=error msg="Failed to destroy network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.889737 env[1320]: time="2024-12-13T14:22:02.889703734Z" level=error msg="encountered an error cleaning up failed sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.889879 env[1320]: time="2024-12-13T14:22:02.889833538Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-gc4rp,Uid:610c2862-cbee-4137-8255-b514a33ef2be,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.890565 kubelet[2223]: E1213 14:22:02.890197 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.890565 kubelet[2223]: E1213 14:22:02.890258 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" Dec 13 14:22:02.890565 kubelet[2223]: E1213 14:22:02.890279 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" Dec 13 14:22:02.890709 kubelet[2223]: E1213 14:22:02.890342 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd595757-gc4rp_calico-apiserver(610c2862-cbee-4137-8255-b514a33ef2be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd595757-gc4rp_calico-apiserver(610c2862-cbee-4137-8255-b514a33ef2be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" podUID="610c2862-cbee-4137-8255-b514a33ef2be" Dec 13 14:22:02.891109 env[1320]: time="2024-12-13T14:22:02.891068856Z" level=error msg="Failed to destroy network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Dec 13 14:22:02.891488 env[1320]: time="2024-12-13T14:22:02.891455187Z" level=error msg="encountered an error cleaning up failed sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.891611 env[1320]: time="2024-12-13T14:22:02.891576271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzwcj,Uid:f9893a5a-eda8-409b-b506-579bb2498aa1,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.892005 kubelet[2223]: E1213 14:22:02.891840 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.892005 kubelet[2223]: E1213 14:22:02.891892 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hzwcj" Dec 13 14:22:02.892005 kubelet[2223]: 
E1213 14:22:02.891917 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-hzwcj" Dec 13 14:22:02.892145 kubelet[2223]: E1213 14:22:02.891971 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-hzwcj_kube-system(f9893a5a-eda8-409b-b506-579bb2498aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-hzwcj_kube-system(f9893a5a-eda8-409b-b506-579bb2498aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hzwcj" podUID="f9893a5a-eda8-409b-b506-579bb2498aa1" Dec 13 14:22:02.892917 env[1320]: time="2024-12-13T14:22:02.892871110Z" level=error msg="Failed to destroy network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.893210 env[1320]: time="2024-12-13T14:22:02.893173239Z" level=error msg="encountered an error cleaning up failed sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.893277 env[1320]: time="2024-12-13T14:22:02.893250402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-m2x9g,Uid:4349a5e2-a677-4bbe-9d6f-1535050c8cda,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.893494 kubelet[2223]: E1213 14:22:02.893473 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.893548 kubelet[2223]: E1213 14:22:02.893517 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" Dec 13 14:22:02.893548 kubelet[2223]: E1213 14:22:02.893536 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" Dec 13 14:22:02.893612 kubelet[2223]: E1213 14:22:02.893585 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7cd595757-m2x9g_calico-apiserver(4349a5e2-a677-4bbe-9d6f-1535050c8cda)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7cd595757-m2x9g_calico-apiserver(4349a5e2-a677-4bbe-9d6f-1535050c8cda)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" podUID="4349a5e2-a677-4bbe-9d6f-1535050c8cda" Dec 13 14:22:02.903974 env[1320]: time="2024-12-13T14:22:02.903920805Z" level=error msg="Failed to destroy network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.904316 env[1320]: time="2024-12-13T14:22:02.904281415Z" level=error msg="encountered an error cleaning up failed sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.904430 env[1320]: time="2024-12-13T14:22:02.904401819Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bxn7n,Uid:08600e9c-2e7c-44be-b230-0c231e6c0b50,Namespace:kube-system,Attempt:0,} failed, error" 
error="failed to setup network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.904671 kubelet[2223]: E1213 14:22:02.904651 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:02.904749 kubelet[2223]: E1213 14:22:02.904695 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bxn7n" Dec 13 14:22:02.904749 kubelet[2223]: E1213 14:22:02.904714 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-bxn7n" Dec 13 14:22:02.904809 kubelet[2223]: E1213 14:22:02.904760 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-bxn7n_kube-system(08600e9c-2e7c-44be-b230-0c231e6c0b50)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"coredns-76f75df574-bxn7n_kube-system(08600e9c-2e7c-44be-b230-0c231e6c0b50)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bxn7n" podUID="08600e9c-2e7c-44be-b230-0c231e6c0b50" Dec 13 14:22:03.575214 env[1320]: time="2024-12-13T14:22:03.575172584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gfp4,Uid:edcf1038-965b-4103-930d-3cbf62798dd0,Namespace:calico-system,Attempt:0,}" Dec 13 14:22:03.621775 env[1320]: time="2024-12-13T14:22:03.621723863Z" level=error msg="Failed to destroy network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.622076 env[1320]: time="2024-12-13T14:22:03.622049352Z" level=error msg="encountered an error cleaning up failed sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.622126 env[1320]: time="2024-12-13T14:22:03.622096754Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gfp4,Uid:edcf1038-965b-4103-930d-3cbf62798dd0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.622441 kubelet[2223]: E1213 14:22:03.622298 2223 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.622441 kubelet[2223]: E1213 14:22:03.622346 2223 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:22:03.622441 kubelet[2223]: E1213 14:22:03.622364 2223 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8gfp4" Dec 13 14:22:03.623801 kubelet[2223]: E1213 14:22:03.622419 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8gfp4_calico-system(edcf1038-965b-4103-930d-3cbf62798dd0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8gfp4_calico-system(edcf1038-965b-4103-930d-3cbf62798dd0)\\\": rpc error: code = Unknown desc = failed to setup 
network for sandbox \\\"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:22:03.646094 kubelet[2223]: I1213 14:22:03.646068 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:03.646698 env[1320]: time="2024-12-13T14:22:03.646649230Z" level=info msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" Dec 13 14:22:03.649792 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60-shm.mount: Deactivated successfully. Dec 13 14:22:03.649938 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b-shm.mount: Deactivated successfully. Dec 13 14:22:03.650036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f-shm.mount: Deactivated successfully. Dec 13 14:22:03.650109 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8-shm.mount: Deactivated successfully. Dec 13 14:22:03.650654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c-shm.mount: Deactivated successfully. 
Dec 13 14:22:03.652505 kubelet[2223]: I1213 14:22:03.652414 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:03.653563 env[1320]: time="2024-12-13T14:22:03.653062177Z" level=info msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" Dec 13 14:22:03.653644 kubelet[2223]: I1213 14:22:03.653568 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:03.654296 env[1320]: time="2024-12-13T14:22:03.654048726Z" level=info msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" Dec 13 14:22:03.655049 kubelet[2223]: I1213 14:22:03.655018 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:03.656012 env[1320]: time="2024-12-13T14:22:03.655442447Z" level=info msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" Dec 13 14:22:03.656684 kubelet[2223]: I1213 14:22:03.656660 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:03.657456 env[1320]: time="2024-12-13T14:22:03.657184778Z" level=info msg="StopPodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" Dec 13 14:22:03.658082 kubelet[2223]: I1213 14:22:03.658055 2223 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:03.658740 env[1320]: time="2024-12-13T14:22:03.658716103Z" level=info msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" Dec 13 14:22:03.681366 env[1320]: 
time="2024-12-13T14:22:03.681306282Z" level=error msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" failed" error="failed to destroy network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.682883 kubelet[2223]: E1213 14:22:03.682836 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:03.682987 kubelet[2223]: E1213 14:22:03.682952 2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b"} Dec 13 14:22:03.683032 kubelet[2223]: E1213 14:22:03.683006 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"610c2862-cbee-4137-8255-b514a33ef2be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.683098 kubelet[2223]: E1213 14:22:03.683036 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"610c2862-cbee-4137-8255-b514a33ef2be\" with KillPodSandboxError: 
\"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" podUID="610c2862-cbee-4137-8255-b514a33ef2be" Dec 13 14:22:03.694389 env[1320]: time="2024-12-13T14:22:03.694320822Z" level=error msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" failed" error="failed to destroy network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.694613 kubelet[2223]: E1213 14:22:03.694588 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:03.694677 kubelet[2223]: E1213 14:22:03.694634 2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60"} Dec 13 14:22:03.694677 kubelet[2223]: E1213 14:22:03.694668 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"accb085d-f789-4c9c-a736-d74c6e73b549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.694765 kubelet[2223]: E1213 14:22:03.694700 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"accb085d-f789-4c9c-a736-d74c6e73b549\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" podUID="accb085d-f789-4c9c-a736-d74c6e73b549" Dec 13 14:22:03.701590 env[1320]: time="2024-12-13T14:22:03.701536633Z" level=error msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" failed" error="failed to destroy network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.701793 kubelet[2223]: E1213 14:22:03.701769 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:03.701860 kubelet[2223]: E1213 14:22:03.701810 
2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f"} Dec 13 14:22:03.701890 kubelet[2223]: E1213 14:22:03.701861 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4349a5e2-a677-4bbe-9d6f-1535050c8cda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.701950 kubelet[2223]: E1213 14:22:03.701890 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4349a5e2-a677-4bbe-9d6f-1535050c8cda\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" podUID="4349a5e2-a677-4bbe-9d6f-1535050c8cda" Dec 13 14:22:03.713310 env[1320]: time="2024-12-13T14:22:03.713257455Z" level=error msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" failed" error="failed to destroy network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.713494 kubelet[2223]: E1213 14:22:03.713465 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:03.713544 kubelet[2223]: E1213 14:22:03.713504 2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9"} Dec 13 14:22:03.713544 kubelet[2223]: E1213 14:22:03.713538 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"edcf1038-965b-4103-930d-3cbf62798dd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.713660 kubelet[2223]: E1213 14:22:03.713561 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"edcf1038-965b-4103-930d-3cbf62798dd0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8gfp4" podUID="edcf1038-965b-4103-930d-3cbf62798dd0" Dec 13 14:22:03.714929 env[1320]: time="2024-12-13T14:22:03.714882222Z" level=error msg="StopPodSandbox for 
\"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" failed" error="failed to destroy network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.715086 kubelet[2223]: E1213 14:22:03.715064 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:03.715135 kubelet[2223]: E1213 14:22:03.715095 2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8"} Dec 13 14:22:03.715135 kubelet[2223]: E1213 14:22:03.715125 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"08600e9c-2e7c-44be-b230-0c231e6c0b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.715206 kubelet[2223]: E1213 14:22:03.715150 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"08600e9c-2e7c-44be-b230-0c231e6c0b50\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-bxn7n" podUID="08600e9c-2e7c-44be-b230-0c231e6c0b50" Dec 13 14:22:03.716616 env[1320]: time="2024-12-13T14:22:03.716559991Z" level=error msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" failed" error="failed to destroy network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 14:22:03.716748 kubelet[2223]: E1213 14:22:03.716725 2223 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:03.716788 kubelet[2223]: E1213 14:22:03.716756 2223 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c"} Dec 13 14:22:03.716825 kubelet[2223]: E1213 14:22:03.716789 2223 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f9893a5a-eda8-409b-b506-579bb2498aa1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\\\": plugin type=\\\"calico\\\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 14:22:03.716825 kubelet[2223]: E1213 14:22:03.716813 2223 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f9893a5a-eda8-409b-b506-579bb2498aa1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-hzwcj" podUID="f9893a5a-eda8-409b-b506-579bb2498aa1" Dec 13 14:22:06.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:06.069274 systemd[1]: Started sshd@8-10.0.0.138:22-10.0.0.1:60962.service. Dec 13 14:22:06.070075 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:22:06.070122 kernel: audit: type=1130 audit(1734099726.068:304): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:06.120000 audit[3385]: USER_ACCT pid=3385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.121392 sshd[3385]: Accepted publickey for core from 10.0.0.1 port 60962 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:06.124882 kernel: audit: type=1101 audit(1734099726.120:305): pid=3385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.124000 audit[3385]: CRED_ACQ pid=3385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.128272 sshd[3385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:06.129860 kernel: audit: type=1103 audit(1734099726.124:306): pid=3385 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.129936 kernel: audit: type=1006 audit(1734099726.124:307): pid=3385 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Dec 13 14:22:06.129965 kernel: audit: type=1300 audit(1734099726.124:307): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe999d2f0 a2=3 a3=1 items=0 ppid=1 pid=3385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) 
Dec 13 14:22:06.124000 audit[3385]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe999d2f0 a2=3 a3=1 items=0 ppid=1 pid=3385 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:06.124000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:06.133934 kernel: audit: type=1327 audit(1734099726.124:307): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:06.136226 systemd-logind[1303]: New session 9 of user core. Dec 13 14:22:06.137103 systemd[1]: Started session-9.scope. Dec 13 14:22:06.141000 audit[3385]: USER_START pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.142000 audit[3388]: CRED_ACQ pid=3388 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.148137 kernel: audit: type=1105 audit(1734099726.141:308): pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.148213 kernel: audit: type=1103 audit(1734099726.142:309): pid=3388 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.271757 sshd[3385]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:06.271000 
audit[3385]: USER_END pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.274257 systemd[1]: sshd@8-10.0.0.138:22-10.0.0.1:60962.service: Deactivated successfully. Dec 13 14:22:06.275122 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 14:22:06.271000 audit[3385]: CRED_DISP pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.279108 kernel: audit: type=1106 audit(1734099726.271:310): pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.279208 kernel: audit: type=1104 audit(1734099726.271:311): pid=3385 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:06.279161 systemd-logind[1303]: Session 9 logged out. Waiting for processes to exit. Dec 13 14:22:06.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.0.138:22-10.0.0.1:60962 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:06.279909 systemd-logind[1303]: Removed session 9. 
Dec 13 14:22:07.431718 kubelet[2223]: I1213 14:22:07.431669 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:22:07.432528 kubelet[2223]: E1213 14:22:07.432474 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:07.477000 audit[3401]: NETFILTER_CFG table=filter:95 family=2 entries=17 op=nft_register_rule pid=3401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:07.477000 audit[3401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffc86cf200 a2=0 a3=1 items=0 ppid=2387 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:07.477000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:07.482000 audit[3401]: NETFILTER_CFG table=nat:96 family=2 entries=19 op=nft_register_chain pid=3401 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:07.482000 audit[3401]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffc86cf200 a2=0 a3=1 items=0 ppid=2387 pid=3401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:07.482000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:07.664750 kubelet[2223]: E1213 14:22:07.664722 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:08.088730 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3898293881.mount: Deactivated successfully. Dec 13 14:22:08.410530 env[1320]: time="2024-12-13T14:22:08.410315415Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:08.413036 env[1320]: time="2024-12-13T14:22:08.412983641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:08.413778 env[1320]: time="2024-12-13T14:22:08.413737860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:08.415927 env[1320]: time="2024-12-13T14:22:08.415869112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:680b8c280812d12c035ca9f0deedea7c761afe0f1cc65109ea2f96bf63801758\"" Dec 13 14:22:08.416550 env[1320]: time="2024-12-13T14:22:08.416517368Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:08.428479 env[1320]: time="2024-12-13T14:22:08.428428063Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 14:22:08.439965 env[1320]: time="2024-12-13T14:22:08.439708622Z" level=info msg="CreateContainer within sandbox \"e66047ea3406a845229cb8b2a4470b9468e369202a62c9bf67655928c216c0d4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e3ba28cd5c8c3f857890f47728769909eeea790eb3fd19296df9e742688abe33\"" 
Dec 13 14:22:08.440291 env[1320]: time="2024-12-13T14:22:08.440247916Z" level=info msg="StartContainer for \"e3ba28cd5c8c3f857890f47728769909eeea790eb3fd19296df9e742688abe33\"" Dec 13 14:22:08.503340 env[1320]: time="2024-12-13T14:22:08.503296116Z" level=info msg="StartContainer for \"e3ba28cd5c8c3f857890f47728769909eeea790eb3fd19296df9e742688abe33\" returns successfully" Dec 13 14:22:08.651938 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 14:22:08.652055 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 14:22:08.669332 kubelet[2223]: E1213 14:22:08.668987 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:08.688913 kubelet[2223]: I1213 14:22:08.687539 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-zlvk7" podStartSLOduration=1.427444146 podStartE2EDuration="15.687492635s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:21:54.15609947 +0000 UTC m=+21.689041134" lastFinishedPulling="2024-12-13 14:22:08.416147919 +0000 UTC m=+35.949089623" observedRunningTime="2024-12-13 14:22:08.68487445 +0000 UTC m=+36.217816154" watchObservedRunningTime="2024-12-13 14:22:08.687492635 +0000 UTC m=+36.220434299" Dec 13 14:22:09.670888 kubelet[2223]: E1213 14:22:09.670500 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:09.685549 systemd[1]: run-containerd-runc-k8s.io-e3ba28cd5c8c3f857890f47728769909eeea790eb3fd19296df9e742688abe33-runc.gsBnuk.mount: Deactivated successfully. 
Dec 13 14:22:09.920000 audit[3556]: AVC avc: denied { write } for pid=3556 comm="tee" name="fd" dev="proc" ino=20547 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.920000 audit[3554]: AVC avc: denied { write } for pid=3554 comm="tee" name="fd" dev="proc" ino=19882 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.920000 audit[3554]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffdc3dea2e a2=241 a3=1b6 items=1 ppid=3525 pid=3554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.920000 audit: CWD cwd="/etc/service/enabled/confd/log" Dec 13 14:22:09.920000 audit: PATH item=0 name="/dev/fd/63" inode=20543 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.920000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.920000 audit[3556]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffffc27a2f a2=241 a3=1b6 items=1 ppid=3537 pid=3556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.920000 audit: CWD cwd="/etc/service/enabled/bird/log" Dec 13 14:22:09.920000 audit: PATH item=0 name="/dev/fd/63" inode=20544 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.920000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.926000 audit[3573]: AVC avc: denied { write } for pid=3573 comm="tee" name="fd" dev="proc" ino=19888 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.926000 audit[3573]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc2bb2a2e a2=241 a3=1b6 items=1 ppid=3524 pid=3573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.926000 audit: CWD cwd="/etc/service/enabled/bird6/log" Dec 13 14:22:09.926000 audit: PATH item=0 name="/dev/fd/63" inode=21576 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.939000 audit[3574]: AVC avc: denied { write } for pid=3574 comm="tee" name="fd" dev="proc" ino=18803 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.939000 audit[3574]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=fffffd5c5a1f a2=241 a3=1b6 items=1 ppid=3532 pid=3574 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.939000 audit: CWD cwd="/etc/service/enabled/node-status-reporter/log" Dec 13 14:22:09.939000 audit: PATH item=0 name="/dev/fd/63" inode=18792 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 
obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.939000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.962000 audit[3601]: AVC avc: denied { write } for pid=3601 comm="tee" name="fd" dev="proc" ino=20570 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.962000 audit[3601]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe98b3a30 a2=241 a3=1b6 items=1 ppid=3531 pid=3601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.962000 audit: CWD cwd="/etc/service/enabled/cni/log" Dec 13 14:22:09.962000 audit: PATH item=0 name="/dev/fd/63" inode=18805 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.962000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.964000 audit[3603]: AVC avc: denied { write } for pid=3603 comm="tee" name="fd" dev="proc" ino=20574 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.964000 audit[3603]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffe256ba1e a2=241 a3=1b6 items=1 ppid=3535 pid=3603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.964000 audit: CWD cwd="/etc/service/enabled/allocate-tunnel-addrs/log" Dec 13 
14:22:09.964000 audit: PATH item=0 name="/dev/fd/63" inode=18806 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.964000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:09.965000 audit[3607]: AVC avc: denied { write } for pid=3607 comm="tee" name="fd" dev="proc" ino=20578 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=dir permissive=0 Dec 13 14:22:09.965000 audit[3607]: SYSCALL arch=c00000b7 syscall=56 success=yes exit=3 a0=ffffffffffffff9c a1=ffffc67eda2e a2=241 a3=1b6 items=1 ppid=3523 pid=3607 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="tee" exe="/usr/bin/coreutils" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:09.965000 audit: CWD cwd="/etc/service/enabled/felix/log" Dec 13 14:22:09.965000 audit: PATH item=0 name="/dev/fd/63" inode=18807 dev=00:0b mode=010600 ouid=0 ogid=0 rdev=00:00 obj=system_u:system_r:kernel_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 14:22:09.965000 audit: PROCTITLE proctitle=2F7573722F62696E2F636F72657574696C73002D2D636F72657574696C732D70726F672D73686562616E673D746565002F7573722F62696E2F746565002F6465762F66642F3633 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.091000 audit: BPF prog-id=10 op=LOAD Dec 13 14:22:10.091000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc19e8f18 a2=98 a3=ffffc19e8f08 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.091000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.092000 audit: BPF prog-id=10 op=UNLOAD Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { 
bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit: BPF prog-id=11 op=LOAD Dec 13 14:22:10.092000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=4 a0=5 a1=ffffc19e8ba8 a2=74 a3=95 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.092000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.092000 audit: BPF prog-id=11 op=UNLOAD Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.092000 audit: BPF prog-id=12 op=LOAD Dec 13 14:22:10.092000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc19e8c08 a2=94 a3=2 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.092000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.092000 audit: BPF prog-id=12 op=UNLOAD Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC 
avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit: BPF prog-id=13 op=LOAD Dec 13 14:22:10.180000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc19e8bc8 a2=40 a3=ffffc19e8bf8 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.180000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.180000 audit: BPF prog-id=13 op=UNLOAD Dec 13 14:22:10.180000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.180000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffc19e8ce0 a2=50 a3=0 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.180000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8c38 a2=28 a3=ffffc19e8d68 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19e8c68 a2=28 a3=ffffc19e8d98 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19e8b18 a2=28 a3=ffffc19e8c48 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8c88 a2=28 a3=ffffc19e8db8 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8c68 a2=28 a3=ffffc19e8d98 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8c58 a2=28 a3=ffffc19e8d88 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8c88 a2=28 a3=ffffc19e8db8 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19e8c68 a2=28 a3=ffffc19e8d98 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19e8c88 a2=28 a3=ffffc19e8db8 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffc19e8c58 a2=28 a3=ffffc19e8d88 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffc19e8cd8 a2=28 a3=ffffc19e8e18 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc19e8a10 a2=50 a3=0 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE 
proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.189000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:22:10.189000 audit: BPF prog-id=14 op=LOAD Dec 13 14:22:10.189000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc19e8a18 a2=94 a3=5 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.189000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.190000 audit: BPF prog-id=14 op=UNLOAD Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffc19e8b20 a2=50 a3=0 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffc19e8c68 a2=4 a3=3 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: 
AVC avc: denied { confidentiality } for pid=3639 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.190000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19e8c48 a2=94 a3=6 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for 
pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { confidentiality } for pid=3639 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.190000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19e8418 a2=94 a3=83 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { perfmon } for pid=3639 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { bpf } for pid=3639 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.190000 audit[3639]: AVC avc: denied { confidentiality } for pid=3639 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.190000 audit[3639]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffc19e8418 a2=94 
a3=83 items=0 ppid=3527 pid=3639 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.190000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.202000 audit: BPF prog-id=15 op=LOAD Dec 13 14:22:10.202000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf158b38 a2=98 a3=ffffcf158b28 items=0 ppid=3527 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.202000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:22:10.203000 audit: BPF prog-id=15 op=UNLOAD Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit: BPF prog-id=16 op=LOAD Dec 13 14:22:10.203000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf1589e8 a2=74 a3=95 items=0 ppid=3527 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.203000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:22:10.203000 audit: BPF prog-id=16 op=UNLOAD Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 
comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { perfmon } for pid=3642 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit[3642]: AVC avc: denied { bpf } for pid=3642 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.203000 audit: BPF prog-id=17 op=LOAD Dec 13 14:22:10.203000 audit[3642]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf158a18 a2=40 a3=ffffcf158a48 items=0 ppid=3527 pid=3642 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.203000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Dec 13 14:22:10.203000 audit: BPF prog-id=17 op=UNLOAD Dec 13 14:22:10.261269 systemd-networkd[1096]: vxlan.calico: Link UP Dec 13 14:22:10.261280 systemd-networkd[1096]: vxlan.calico: Gained carrier Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit: BPF prog-id=18 op=LOAD Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7eb998 a2=98 a3=ffffec7eb988 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit: BPF prog-id=18 op=UNLOAD Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit: BPF prog-id=19 op=LOAD Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7eb678 a2=74 a3=95 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit: BPF prog-id=19 op=UNLOAD Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit: BPF prog-id=20 op=LOAD Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffec7eb6d8 a2=94 a3=2 items=0 ppid=3527 pid=3669 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit: BPF prog-id=20 op=UNLOAD Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb708 a2=28 a3=ffffec7eb838 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffec7eb738 a2=28 a3=ffffec7eb868 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffec7eb5e8 a2=28 a3=ffffec7eb718 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb758 a2=28 a3=ffffec7eb888 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb738 a2=28 a3=ffffec7eb868 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb728 a2=28 a3=ffffec7eb858 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb758 a2=28 a3=ffffec7eb888 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffec7eb738 a2=28 a3=ffffec7eb868 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffec7eb758 a2=28 a3=ffffec7eb888 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for 
pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffec7eb728 a2=28 a3=ffffec7eb858 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=12 a1=ffffec7eb7a8 a2=28 a3=ffffec7eb8e8 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 
14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.276000 audit: BPF prog-id=21 op=LOAD Dec 13 14:22:10.276000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7eb5c8 a2=40 a3=ffffec7eb5f8 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.276000 audit: PROCTITLE 
proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.277000 audit: BPF prog-id=21 op=UNLOAD Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=0 a1=ffffec7eb5f0 a2=50 a3=0 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=0 a1=ffffec7eb5f0 a2=50 a3=0 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit: BPF prog-id=22 op=LOAD Dec 13 14:22:10.277000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7ead58 a2=94 a3=2 items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.277000 audit: BPF prog-id=22 op=UNLOAD Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { perfmon } for pid=3669 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit[3669]: AVC avc: denied { bpf } for pid=3669 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.277000 audit: BPF prog-id=23 op=LOAD Dec 13 14:22:10.277000 audit[3669]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffec7eaee8 a2=94 a3=2d items=0 ppid=3527 pid=3669 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.277000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 
13 14:22:10.279000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.279000 audit: BPF prog-id=24 op=LOAD Dec 13 14:22:10.279000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcb0042c8 a2=98 a3=ffffcb0042b8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.279000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.280000 audit: BPF prog-id=24 op=UNLOAD Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC 
avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit: BPF prog-id=25 op=LOAD Dec 13 14:22:10.280000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb003f58 a2=74 a3=95 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.280000 audit: BPF prog-id=25 op=UNLOAD Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 
comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.280000 audit: BPF prog-id=26 op=LOAD Dec 13 14:22:10.280000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb003fb8 a2=94 a3=2 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.280000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.280000 audit: BPF prog-id=26 op=UNLOAD Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit: BPF prog-id=27 op=LOAD Dec 13 14:22:10.366000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffcb003f78 a2=40 a3=ffffcb003fa8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.366000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.366000 audit: BPF prog-id=27 op=UNLOAD Dec 13 14:22:10.366000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.366000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=0 a1=ffffcb004090 a2=50 a3=0 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.366000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.374000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.374000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb003fe8 a2=28 a3=ffffcb004118 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.374000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb004018 a2=28 a3=ffffcb004148 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb003ec8 a2=28 a3=ffffcb003ff8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb004038 a2=28 a3=ffffcb004168 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb004018 a2=28 a3=ffffcb004148 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb004008 a2=28 a3=ffffcb004138 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 
audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb004038 a2=28 a3=ffffcb004168 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb004018 a2=28 a3=ffffcb004148 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb004038 a2=28 a3=ffffcb004168 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=12 a1=ffffcb004008 a2=28 a3=ffffcb004138 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=12 a1=ffffcb004088 a2=28 a3=ffffcb0041c8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 
audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcb003dc0 a2=50 a3=0 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit: BPF prog-id=28 op=LOAD Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcb003dc8 a2=94 a3=5 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit: BPF prog-id=28 op=UNLOAD Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=0 a1=ffffcb003ed0 a2=50 a3=0 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } 
for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=16 a1=ffffcb004018 a2=4 a3=3 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 
13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { confidentiality } for pid=3673 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb003ff8 a2=94 a3=6 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { confidentiality } for pid=3673 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb0037c8 a2=94 a3=83 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: 
denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { perfmon } for pid=3673 comm="bpftool" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.375000 audit[3673]: AVC avc: denied { confidentiality } for pid=3673 comm="bpftool" lockdown_reason="use of bpf to read kernel RAM" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=0 Dec 13 14:22:10.375000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-22 a0=5 a1=ffffcb0037c8 a2=94 a3=83 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.375000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.376000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.376000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb005208 a2=10 a3=ffffcb0052f8 items=0 ppid=3527 
pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.376000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.376000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb0050c8 a2=10 a3=ffffcb0051b8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.376000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.376000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb005038 a2=10 a3=ffffcb0051b8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 
14:22:10.376000 audit[3673]: AVC avc: denied { bpf } for pid=3673 comm="bpftool" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Dec 13 14:22:10.376000 audit[3673]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=ffffcb005038 a2=10 a3=ffffcb0051b8 items=0 ppid=3527 pid=3673 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.376000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Dec 13 14:22:10.386000 audit: BPF prog-id=23 op=UNLOAD Dec 13 14:22:10.429000 audit[3701]: NETFILTER_CFG table=mangle:97 family=2 entries=16 op=nft_register_chain pid=3701 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:10.429000 audit[3701]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffc13eeb60 a2=0 a3=ffffbe506fa8 items=0 ppid=3527 pid=3701 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.429000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:10.434000 audit[3703]: NETFILTER_CFG table=nat:98 family=2 entries=15 op=nft_register_chain pid=3703 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:10.435000 audit[3705]: NETFILTER_CFG table=filter:99 family=2 entries=39 op=nft_register_chain pid=3705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:10.435000 audit[3705]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=18968 a0=3 a1=ffffc7c52f40 a2=0 a3=ffff8ba6afa8 items=0 ppid=3527 pid=3705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.435000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:10.434000 audit[3703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffd00b28d0 a2=0 a3=ffff8275afa8 items=0 ppid=3527 pid=3703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.434000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:10.445000 audit[3702]: NETFILTER_CFG table=raw:100 family=2 entries=21 op=nft_register_chain pid=3702 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:10.445000 audit[3702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=fffffe0b6840 a2=0 a3=ffffb131afa8 items=0 ppid=3527 pid=3702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:10.445000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:10.672893 kubelet[2223]: E1213 14:22:10.672859 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" 
Dec 13 14:22:10.696327 systemd[1]: run-containerd-runc-k8s.io-e3ba28cd5c8c3f857890f47728769909eeea790eb3fd19296df9e742688abe33-runc.hfWOEd.mount: Deactivated successfully. Dec 13 14:22:11.274000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.275322 systemd[1]: Started sshd@9-10.0.0.138:22-10.0.0.1:60964.service. Dec 13 14:22:11.276248 kernel: kauditd_printk_skb: 522 callbacks suppressed Dec 13 14:22:11.276314 kernel: audit: type=1130 audit(1734099731.274:417): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.328000 audit[3737]: USER_ACCT pid=3737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.329983 sshd[3737]: Accepted publickey for core from 10.0.0.1 port 60964 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:11.330000 audit[3737]: CRED_ACQ pid=3737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.333221 sshd[3737]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:11.335974 kernel: audit: type=1101 audit(1734099731.328:418): pid=3737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 
addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.336023 kernel: audit: type=1103 audit(1734099731.330:419): pid=3737 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.336059 kernel: audit: type=1006 audit(1734099731.330:420): pid=3737 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Dec 13 14:22:11.338028 kernel: audit: type=1300 audit(1734099731.330:420): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6e2ac50 a2=3 a3=1 items=0 ppid=1 pid=3737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:11.330000 audit[3737]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff6e2ac50 a2=3 a3=1 items=0 ppid=1 pid=3737 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:11.339195 systemd-logind[1303]: New session 10 of user core. Dec 13 14:22:11.339546 systemd[1]: Started session-10.scope. 
Dec 13 14:22:11.341415 kernel: audit: type=1327 audit(1734099731.330:420): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:11.330000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:11.344000 audit[3737]: USER_START pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.346000 audit[3740]: CRED_ACQ pid=3740 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.351975 kernel: audit: type=1105 audit(1734099731.344:421): pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.352034 kernel: audit: type=1103 audit(1734099731.346:422): pid=3740 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.476000 audit[3737]: USER_END pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.475992 sshd[3737]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:11.478291 systemd[1]: Started sshd@10-10.0.0.138:22-10.0.0.1:60970.service. 
Dec 13 14:22:11.480860 kernel: audit: type=1106 audit(1734099731.476:423): pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.480902 kernel: audit: type=1104 audit(1734099731.476:424): pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.476000 audit[3737]: CRED_DISP pid=3737 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:60970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.484043 systemd[1]: sshd@9-10.0.0.138:22-10.0.0.1:60964.service: Deactivated successfully. Dec 13 14:22:11.483000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.0.138:22-10.0.0.1:60964 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.484905 systemd-logind[1303]: Session 10 logged out. Waiting for processes to exit. Dec 13 14:22:11.484970 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 14:22:11.485689 systemd-logind[1303]: Removed session 10. 
Dec 13 14:22:11.522000 audit[3750]: USER_ACCT pid=3750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.523298 sshd[3750]: Accepted publickey for core from 10.0.0.1 port 60970 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:11.523953 systemd-networkd[1096]: vxlan.calico: Gained IPv6LL Dec 13 14:22:11.523000 audit[3750]: CRED_ACQ pid=3750 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.523000 audit[3750]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcaee4190 a2=3 a3=1 items=0 ppid=1 pid=3750 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:11.523000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:11.524914 sshd[3750]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:11.528568 systemd-logind[1303]: New session 11 of user core. Dec 13 14:22:11.528954 systemd[1]: Started session-11.scope. 
Dec 13 14:22:11.532000 audit[3750]: USER_START pid=3750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.533000 audit[3755]: CRED_ACQ pid=3755 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:60984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.689248 systemd[1]: Started sshd@11-10.0.0.138:22-10.0.0.1:60984.service. Dec 13 14:22:11.690015 sshd[3750]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:11.691000 audit[3750]: USER_END pid=3750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.692000 audit[3750]: CRED_DISP pid=3750 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.695902 systemd[1]: sshd@10-10.0.0.138:22-10.0.0.1:60970.service: Deactivated successfully. Dec 13 14:22:11.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.0.138:22-10.0.0.1:60970 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:11.698542 systemd-logind[1303]: Session 11 logged out. Waiting for processes to exit. Dec 13 14:22:11.698648 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 14:22:11.699287 systemd-logind[1303]: Removed session 11. Dec 13 14:22:11.747000 audit[3763]: USER_ACCT pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.749005 sshd[3763]: Accepted publickey for core from 10.0.0.1 port 60984 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:11.748000 audit[3763]: CRED_ACQ pid=3763 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.748000 audit[3763]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe5d27ad0 a2=3 a3=1 items=0 ppid=1 pid=3763 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:11.748000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:11.750089 sshd[3763]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:11.753301 systemd-logind[1303]: New session 12 of user core. Dec 13 14:22:11.754104 systemd[1]: Started session-12.scope. 
Dec 13 14:22:11.756000 audit[3763]: USER_START pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.757000 audit[3768]: CRED_ACQ pid=3768 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.870905 sshd[3763]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:11.870000 audit[3763]: USER_END pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.871000 audit[3763]: CRED_DISP pid=3763 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:11.873706 systemd[1]: sshd@11-10.0.0.138:22-10.0.0.1:60984.service: Deactivated successfully. Dec 13 14:22:11.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.0.138:22-10.0.0.1:60984 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:11.874610 systemd-logind[1303]: Session 12 logged out. Waiting for processes to exit. Dec 13 14:22:11.874671 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 14:22:11.875334 systemd-logind[1303]: Removed session 12. 
Dec 13 14:22:14.573909 env[1320]: time="2024-12-13T14:22:14.573823109Z" level=info msg="StopPodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.655 [INFO][3802] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.655 [INFO][3802] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" iface="eth0" netns="/var/run/netns/cni-c26a9c1d-c969-598e-f643-5e9e4b6faf97" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.656 [INFO][3802] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" iface="eth0" netns="/var/run/netns/cni-c26a9c1d-c969-598e-f643-5e9e4b6faf97" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.656 [INFO][3802] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" iface="eth0" netns="/var/run/netns/cni-c26a9c1d-c969-598e-f643-5e9e4b6faf97" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.656 [INFO][3802] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.657 [INFO][3802] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.741 [INFO][3810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.741 [INFO][3810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.742 [INFO][3810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.752 [WARNING][3810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.752 [INFO][3810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.753 [INFO][3810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:14.757588 env[1320]: 2024-12-13 14:22:14.755 [INFO][3802] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:14.761395 env[1320]: time="2024-12-13T14:22:14.759852816Z" level=info msg="TearDown network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" successfully" Dec 13 14:22:14.761395 env[1320]: time="2024-12-13T14:22:14.759892617Z" level=info msg="StopPodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" returns successfully" Dec 13 14:22:14.761521 kubelet[2223]: E1213 14:22:14.760250 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:14.759815 systemd[1]: run-netns-cni\x2dc26a9c1d\x2dc969\x2d598e\x2df643\x2d5e9e4b6faf97.mount: Deactivated successfully. 
Dec 13 14:22:14.762072 env[1320]: time="2024-12-13T14:22:14.761548012Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bxn7n,Uid:08600e9c-2e7c-44be-b230-0c231e6c0b50,Namespace:kube-system,Attempt:1,}" Dec 13 14:22:14.875799 systemd-networkd[1096]: cali938c225f7bd: Link UP Dec 13 14:22:14.877571 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:22:14.877674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali938c225f7bd: link becomes ready Dec 13 14:22:14.877687 systemd-networkd[1096]: cali938c225f7bd: Gained carrier Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.808 [INFO][3818] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--bxn7n-eth0 coredns-76f75df574- kube-system 08600e9c-2e7c-44be-b230-0c231e6c0b50 882 0 2024-12-13 14:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-bxn7n eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali938c225f7bd [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.808 [INFO][3818] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.836 [INFO][3831] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" HandleID="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.847 [INFO][3831] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" HandleID="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003a8eb0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-bxn7n", "timestamp":"2024-12-13 14:22:14.836462745 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.847 [INFO][3831] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.847 [INFO][3831] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.847 [INFO][3831] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.848 [INFO][3831] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.853 [INFO][3831] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.857 [INFO][3831] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.859 [INFO][3831] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.860 [INFO][3831] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.861 [INFO][3831] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.862 [INFO][3831] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11 Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.865 [INFO][3831] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.870 [INFO][3831] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" host="localhost" Dec 13 
14:22:14.893943 env[1320]: 2024-12-13 14:22:14.870 [INFO][3831] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" host="localhost" Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.870 [INFO][3831] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:14.893943 env[1320]: 2024-12-13 14:22:14.870 [INFO][3831] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" HandleID="k8s-pod-network.57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.873 [INFO][3818] cni-plugin/k8s.go 386: Populated endpoint ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bxn7n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08600e9c-2e7c-44be-b230-0c231e6c0b50", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-bxn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938c225f7bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.873 [INFO][3818] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.873 [INFO][3818] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali938c225f7bd ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.878 [INFO][3818] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.878 [INFO][3818] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bxn7n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08600e9c-2e7c-44be-b230-0c231e6c0b50", ResourceVersion:"882", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11", Pod:"coredns-76f75df574-bxn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938c225f7bd", MAC:"06:a5:be:1f:b8:ef", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:14.894486 env[1320]: 2024-12-13 14:22:14.887 [INFO][3818] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11" Namespace="kube-system" Pod="coredns-76f75df574-bxn7n" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:14.896000 audit[3845]: NETFILTER_CFG table=filter:101 family=2 entries=34 op=nft_register_chain pid=3845 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:14.896000 audit[3845]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19148 a0=3 a1=ffffe6e7c9d0 a2=0 a3=ffffb1fb1fa8 items=0 ppid=3527 pid=3845 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:14.896000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:14.905398 env[1320]: time="2024-12-13T14:22:14.905255750Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:14.905522 env[1320]: time="2024-12-13T14:22:14.905497555Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:14.905629 env[1320]: time="2024-12-13T14:22:14.905601157Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:14.905921 env[1320]: time="2024-12-13T14:22:14.905839242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11 pid=3863 runtime=io.containerd.runc.v2 Dec 13 14:22:14.936734 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:14.954030 env[1320]: time="2024-12-13T14:22:14.953995334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-bxn7n,Uid:08600e9c-2e7c-44be-b230-0c231e6c0b50,Namespace:kube-system,Attempt:1,} returns sandbox id \"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11\"" Dec 13 14:22:14.954886 kubelet[2223]: E1213 14:22:14.954713 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:14.959096 env[1320]: time="2024-12-13T14:22:14.959046440Z" level=info msg="CreateContainer within sandbox \"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:22:14.969331 env[1320]: time="2024-12-13T14:22:14.969293015Z" level=info msg="CreateContainer within sandbox \"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"0c4805a18d1dded2508b02a2aa5dad40eb91e766003fdc904234cb4f8475536d\"" Dec 13 14:22:14.970666 env[1320]: time="2024-12-13T14:22:14.969935829Z" level=info msg="StartContainer for \"0c4805a18d1dded2508b02a2aa5dad40eb91e766003fdc904234cb4f8475536d\"" Dec 13 14:22:15.017755 env[1320]: time="2024-12-13T14:22:15.017692024Z" level=info msg="StartContainer for \"0c4805a18d1dded2508b02a2aa5dad40eb91e766003fdc904234cb4f8475536d\" returns successfully" 
Dec 13 14:22:15.573296 env[1320]: time="2024-12-13T14:22:15.573253818Z" level=info msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.614 [INFO][3957] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.614 [INFO][3957] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" iface="eth0" netns="/var/run/netns/cni-59ef6f62-0ed2-f3fd-c232-ae27e8f80917" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.614 [INFO][3957] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" iface="eth0" netns="/var/run/netns/cni-59ef6f62-0ed2-f3fd-c232-ae27e8f80917" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.614 [INFO][3957] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" iface="eth0" netns="/var/run/netns/cni-59ef6f62-0ed2-f3fd-c232-ae27e8f80917" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.614 [INFO][3957] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.615 [INFO][3957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.632 [INFO][3965] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.633 [INFO][3965] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.633 [INFO][3965] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.645 [WARNING][3965] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.645 [INFO][3965] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.646 [INFO][3965] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:15.650712 env[1320]: 2024-12-13 14:22:15.648 [INFO][3957] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:15.651868 env[1320]: time="2024-12-13T14:22:15.651689667Z" level=info msg="TearDown network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" successfully" Dec 13 14:22:15.651970 env[1320]: time="2024-12-13T14:22:15.651950992Z" level=info msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" returns successfully" Dec 13 14:22:15.653317 env[1320]: time="2024-12-13T14:22:15.653277299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b75b4d-fztb7,Uid:accb085d-f789-4c9c-a736-d74c6e73b549,Namespace:calico-system,Attempt:1,}" Dec 13 14:22:15.685912 kubelet[2223]: E1213 14:22:15.685823 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:15.722868 kubelet[2223]: I1213 14:22:15.709237 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" 
pod="kube-system/coredns-76f75df574-bxn7n" podStartSLOduration=28.709184246 podStartE2EDuration="28.709184246s" podCreationTimestamp="2024-12-13 14:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:22:15.708583233 +0000 UTC m=+43.241524977" watchObservedRunningTime="2024-12-13 14:22:15.709184246 +0000 UTC m=+43.242125950" Dec 13 14:22:15.747000 audit[3993]: NETFILTER_CFG table=filter:102 family=2 entries=16 op=nft_register_rule pid=3993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:15.747000 audit[3993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5908 a0=3 a1=ffffea7eeb70 a2=0 a3=1 items=0 ppid=2387 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:15.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:15.753000 audit[3993]: NETFILTER_CFG table=nat:103 family=2 entries=14 op=nft_register_rule pid=3993 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:15.753000 audit[3993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffea7eeb70 a2=0 a3=1 items=0 ppid=2387 pid=3993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:15.753000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:15.761377 systemd[1]: run-netns-cni\x2d59ef6f62\x2d0ed2\x2df3fd\x2dc232\x2dae27e8f80917.mount: Deactivated successfully. 
Dec 13 14:22:15.769000 audit[3997]: NETFILTER_CFG table=filter:104 family=2 entries=13 op=nft_register_rule pid=3997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:15.769000 audit[3997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffc2104c60 a2=0 a3=1 items=0 ppid=2387 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:15.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:15.777000 audit[3997]: NETFILTER_CFG table=nat:105 family=2 entries=35 op=nft_register_chain pid=3997 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:15.777000 audit[3997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffc2104c60 a2=0 a3=1 items=0 ppid=2387 pid=3997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:15.777000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:15.809504 systemd-networkd[1096]: caliaa663756530: Link UP Dec 13 14:22:15.810887 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaa663756530: link becomes ready Dec 13 14:22:15.810797 systemd-networkd[1096]: caliaa663756530: Gained carrier Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.693 [INFO][3973] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0 calico-kube-controllers-7778b75b4d- calico-system accb085d-f789-4c9c-a736-d74c6e73b549 895 0 2024-12-13 14:21:53 +0000 UTC 
map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7778b75b4d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-7778b75b4d-fztb7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] caliaa663756530 [] []}} ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.693 [INFO][3973] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.770 [INFO][3988] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" HandleID="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.781 [INFO][3988] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" HandleID="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40005990c0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-7778b75b4d-fztb7", 
"timestamp":"2024-12-13 14:22:15.770138936 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.781 [INFO][3988] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.781 [INFO][3988] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.781 [INFO][3988] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.783 [INFO][3988] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.786 [INFO][3988] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.790 [INFO][3988] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.791 [INFO][3988] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.793 [INFO][3988] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.793 [INFO][3988] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.795 [INFO][3988] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c Dec 
13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.798 [INFO][3988] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.803 [INFO][3988] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.803 [INFO][3988] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" host="localhost" Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.803 [INFO][3988] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:15.824496 env[1320]: 2024-12-13 14:22:15.803 [INFO][3988] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" HandleID="k8s-pod-network.5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.806 [INFO][3973] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0", GenerateName:"calico-kube-controllers-7778b75b4d-", Namespace:"calico-system", SelfLink:"", UID:"accb085d-f789-4c9c-a736-d74c6e73b549", 
ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b75b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-7778b75b4d-fztb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa663756530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.806 [INFO][3973] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.806 [INFO][3973] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa663756530 ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.810 [INFO][3973] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.811 [INFO][3973] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0", GenerateName:"calico-kube-controllers-7778b75b4d-", Namespace:"calico-system", SelfLink:"", UID:"accb085d-f789-4c9c-a736-d74c6e73b549", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b75b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c", Pod:"calico-kube-controllers-7778b75b4d-fztb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa663756530", MAC:"06:67:c3:9c:d4:cc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:15.825703 env[1320]: 2024-12-13 14:22:15.821 [INFO][3973] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c" Namespace="calico-system" Pod="calico-kube-controllers-7778b75b4d-fztb7" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:15.830000 audit[4010]: NETFILTER_CFG table=filter:106 family=2 entries=38 op=nft_register_chain pid=4010 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:15.830000 audit[4010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=20336 a0=3 a1=fffff9c17170 a2=0 a3=ffffbbc0efa8 items=0 ppid=3527 pid=4010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:15.830000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:15.842481 env[1320]: time="2024-12-13T14:22:15.842414618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:15.842481 env[1320]: time="2024-12-13T14:22:15.842469939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:15.842605 env[1320]: time="2024-12-13T14:22:15.842481260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:15.842888 env[1320]: time="2024-12-13T14:22:15.842668223Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c pid=4020 runtime=io.containerd.runc.v2 Dec 13 14:22:15.877995 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:15.894391 env[1320]: time="2024-12-13T14:22:15.894354523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7778b75b4d-fztb7,Uid:accb085d-f789-4c9c-a736-d74c6e73b549,Namespace:calico-system,Attempt:1,} returns sandbox id \"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c\"" Dec 13 14:22:15.895745 env[1320]: time="2024-12-13T14:22:15.895716911Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 14:22:16.573745 env[1320]: time="2024-12-13T14:22:16.573517827Z" level=info msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.614 [INFO][4070] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.614 [INFO][4070] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" iface="eth0" netns="/var/run/netns/cni-7c0bfe93-6b48-1044-c5ec-01f0e8624218" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.615 [INFO][4070] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" iface="eth0" netns="/var/run/netns/cni-7c0bfe93-6b48-1044-c5ec-01f0e8624218" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.615 [INFO][4070] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" iface="eth0" netns="/var/run/netns/cni-7c0bfe93-6b48-1044-c5ec-01f0e8624218" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.615 [INFO][4070] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.615 [INFO][4070] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.635 [INFO][4078] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.635 [INFO][4078] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.635 [INFO][4078] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.644 [WARNING][4078] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.644 [INFO][4078] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.645 [INFO][4078] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:16.649148 env[1320]: 2024-12-13 14:22:16.647 [INFO][4070] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:16.654257 env[1320]: time="2024-12-13T14:22:16.652384168Z" level=info msg="TearDown network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" successfully" Dec 13 14:22:16.654257 env[1320]: time="2024-12-13T14:22:16.652415249Z" level=info msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" returns successfully" Dec 13 14:22:16.654257 env[1320]: time="2024-12-13T14:22:16.653813477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzwcj,Uid:f9893a5a-eda8-409b-b506-579bb2498aa1,Namespace:kube-system,Attempt:1,}" Dec 13 14:22:16.654418 kubelet[2223]: E1213 14:22:16.652869 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:16.651523 systemd[1]: run-netns-cni\x2d7c0bfe93\x2d6b48\x2d1044\x2dc5ec\x2d01f0e8624218.mount: Deactivated successfully. 
Dec 13 14:22:16.688406 kubelet[2223]: E1213 14:22:16.688369 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:16.762965 systemd-networkd[1096]: caliaccefbfdedf: Link UP Dec 13 14:22:16.764990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:22:16.765078 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): caliaccefbfdedf: link becomes ready Dec 13 14:22:16.765132 systemd-networkd[1096]: caliaccefbfdedf: Gained carrier Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.700 [INFO][4087] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--76f75df574--hzwcj-eth0 coredns-76f75df574- kube-system f9893a5a-eda8-409b-b506-579bb2498aa1 914 0 2024-12-13 14:21:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-76f75df574-hzwcj eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaccefbfdedf [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.700 [INFO][4087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.722 [INFO][4100] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" HandleID="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.733 [INFO][4100] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" HandleID="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000279050), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-76f75df574-hzwcj", "timestamp":"2024-12-13 14:22:16.722692577 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.733 [INFO][4100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.733 [INFO][4100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.733 [INFO][4100] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.735 [INFO][4100] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.738 [INFO][4100] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.742 [INFO][4100] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.743 [INFO][4100] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.747 [INFO][4100] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.747 [INFO][4100] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.748 [INFO][4100] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018 Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.751 [INFO][4100] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.756 [INFO][4100] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" host="localhost" Dec 13 
14:22:16.776114 env[1320]: 2024-12-13 14:22:16.756 [INFO][4100] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" host="localhost" Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.757 [INFO][4100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:16.776114 env[1320]: 2024-12-13 14:22:16.757 [INFO][4100] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" HandleID="k8s-pod-network.f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.761 [INFO][4087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hzwcj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9893a5a-eda8-409b-b506-579bb2498aa1", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"localhost", ContainerID:"", Pod:"coredns-76f75df574-hzwcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaccefbfdedf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.761 [INFO][4087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.761 [INFO][4087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaccefbfdedf ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.765 [INFO][4087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.765 [INFO][4087] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hzwcj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9893a5a-eda8-409b-b506-579bb2498aa1", ResourceVersion:"914", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018", Pod:"coredns-76f75df574-hzwcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaccefbfdedf", MAC:"ee:b7:9a:5d:29:e0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, 
AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:16.776659 env[1320]: 2024-12-13 14:22:16.774 [INFO][4087] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018" Namespace="kube-system" Pod="coredns-76f75df574-hzwcj" WorkloadEndpoint="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:16.787869 kernel: kauditd_printk_skb: 41 callbacks suppressed Dec 13 14:22:16.787957 kernel: audit: type=1325 audit(1734099736.783:450): table=filter:107 family=2 entries=40 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:16.787981 kernel: audit: type=1300 audit(1734099736.783:450): arch=c00000b7 syscall=211 success=yes exit=21072 a0=3 a1=ffffe0032540 a2=0 a3=ffffb544dfa8 items=0 ppid=3527 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:16.783000 audit[4126]: NETFILTER_CFG table=filter:107 family=2 entries=40 op=nft_register_chain pid=4126 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:16.783000 audit[4126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21072 a0=3 a1=ffffe0032540 a2=0 a3=ffffb544dfa8 items=0 ppid=3527 pid=4126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:16.789552 env[1320]: time="2024-12-13T14:22:16.789493756Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:16.789699 env[1320]: time="2024-12-13T14:22:16.789675040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:16.789803 env[1320]: time="2024-12-13T14:22:16.789780442Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:16.790050 env[1320]: time="2024-12-13T14:22:16.790021607Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018 pid=4130 runtime=io.containerd.runc.v2 Dec 13 14:22:16.783000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:16.793435 kernel: audit: type=1327 audit(1734099736.783:450): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:16.833962 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:16.856041 env[1320]: time="2024-12-13T14:22:16.855999290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-hzwcj,Uid:f9893a5a-eda8-409b-b506-579bb2498aa1,Namespace:kube-system,Attempt:1,} returns sandbox id \"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018\"" Dec 13 14:22:16.857070 kubelet[2223]: E1213 14:22:16.856901 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:16.858531 env[1320]: time="2024-12-13T14:22:16.858493020Z" level=info msg="CreateContainer within sandbox \"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 14:22:16.873192 env[1320]: time="2024-12-13T14:22:16.873160954Z" 
level=info msg="CreateContainer within sandbox \"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"23a479e78ca3fcd6c18e996101d4d27e9bc368634461f054328ca39af8197987\"" Dec 13 14:22:16.873668 env[1320]: time="2024-12-13T14:22:16.873635163Z" level=info msg="StartContainer for \"23a479e78ca3fcd6c18e996101d4d27e9bc368634461f054328ca39af8197987\"" Dec 13 14:22:16.874133 systemd[1]: Started sshd@12-10.0.0.138:22-10.0.0.1:44700.service. Dec 13 14:22:16.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:16.877916 kernel: audit: type=1130 audit(1734099736.873:451): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:16.901669 systemd-networkd[1096]: cali938c225f7bd: Gained IPv6LL Dec 13 14:22:16.928663 env[1320]: time="2024-12-13T14:22:16.927156116Z" level=info msg="StartContainer for \"23a479e78ca3fcd6c18e996101d4d27e9bc368634461f054328ca39af8197987\" returns successfully" Dec 13 14:22:16.934924 sshd[4164]: Accepted publickey for core from 10.0.0.1 port 44700 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:16.933000 audit[4164]: USER_ACCT pid=4164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.940072 sshd[4164]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:16.935000 audit[4164]: CRED_ACQ pid=4164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.943786 kernel: audit: type=1101 audit(1734099736.933:452): pid=4164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.943874 kernel: audit: type=1103 audit(1734099736.935:453): pid=4164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.946212 kernel: audit: type=1006 audit(1734099736.935:454): pid=4164 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Dec 13 14:22:16.946276 kernel: audit: type=1300 
audit(1734099736.935:454): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6727ee0 a2=3 a3=1 items=0 ppid=1 pid=4164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:16.935000 audit[4164]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc6727ee0 a2=3 a3=1 items=0 ppid=1 pid=4164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:16.935000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:16.949995 systemd[1]: Started session-13.scope. Dec 13 14:22:16.950063 systemd-logind[1303]: New session 13 of user core. Dec 13 14:22:16.951086 kernel: audit: type=1327 audit(1734099736.935:454): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:16.954000 audit[4164]: USER_START pid=4164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.957000 audit[4204]: CRED_ACQ pid=4204 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:16.960882 kernel: audit: type=1105 audit(1734099736.954:455): pid=4164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:17.123792 sshd[4164]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:17.123000 
audit[4164]: USER_END pid=4164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:17.124000 audit[4164]: CRED_DISP pid=4164 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:17.126323 systemd[1]: sshd@12-10.0.0.138:22-10.0.0.1:44700.service: Deactivated successfully. Dec 13 14:22:17.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.0.138:22-10.0.0.1:44700 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:17.127397 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 14:22:17.127710 systemd-logind[1303]: Session 13 logged out. Waiting for processes to exit. Dec 13 14:22:17.128591 systemd-logind[1303]: Removed session 13. Dec 13 14:22:17.156173 systemd-networkd[1096]: caliaa663756530: Gained IPv6LL Dec 13 14:22:17.573441 env[1320]: time="2024-12-13T14:22:17.572998733Z" level=info msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" iface="eth0" netns="/var/run/netns/cni-5080dc57-f0fb-d883-8719-169b9fbf2cb6" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" iface="eth0" netns="/var/run/netns/cni-5080dc57-f0fb-d883-8719-169b9fbf2cb6" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" iface="eth0" netns="/var/run/netns/cni-5080dc57-f0fb-d883-8719-169b9fbf2cb6" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.616 [INFO][4234] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.634 [INFO][4242] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.634 [INFO][4242] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.634 [INFO][4242] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.643 [WARNING][4242] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.643 [INFO][4242] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.644 [INFO][4242] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:17.647785 env[1320]: 2024-12-13 14:22:17.646 [INFO][4234] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:17.648319 env[1320]: time="2024-12-13T14:22:17.647925003Z" level=info msg="TearDown network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" successfully" Dec 13 14:22:17.648319 env[1320]: time="2024-12-13T14:22:17.647957243Z" level=info msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" returns successfully" Dec 13 14:22:17.648531 env[1320]: time="2024-12-13T14:22:17.648493774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gfp4,Uid:edcf1038-965b-4103-930d-3cbf62798dd0,Namespace:calico-system,Attempt:1,}" Dec 13 14:22:17.680389 env[1320]: time="2024-12-13T14:22:17.680345158Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:17.682594 env[1320]: time="2024-12-13T14:22:17.682561802Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:17.683902 env[1320]: time="2024-12-13T14:22:17.683876268Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/kube-controllers:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:17.685039 env[1320]: time="2024-12-13T14:22:17.685004770Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:17.685615 env[1320]: time="2024-12-13T14:22:17.685577061Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:32c335fdb9d757e7ba6a76a9cfa8d292a5a229101ae7ea37b42f53c28adf2db1\"" Dec 13 14:22:17.693531 kubelet[2223]: E1213 14:22:17.692920 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:17.694495 kubelet[2223]: E1213 14:22:17.694382 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:17.695994 env[1320]: time="2024-12-13T14:22:17.694671919Z" level=info msg="CreateContainer within sandbox \"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 14:22:17.703455 kubelet[2223]: I1213 14:22:17.702171 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-hzwcj" podStartSLOduration=30.702138386 podStartE2EDuration="30.702138386s" 
podCreationTimestamp="2024-12-13 14:21:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 14:22:17.701875421 +0000 UTC m=+45.234817125" watchObservedRunningTime="2024-12-13 14:22:17.702138386 +0000 UTC m=+45.235080090" Dec 13 14:22:17.711000 audit[4268]: NETFILTER_CFG table=filter:108 family=2 entries=10 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:17.711000 audit[4268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffdc520410 a2=0 a3=1 items=0 ppid=2387 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:17.711000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:17.718000 audit[4268]: NETFILTER_CFG table=nat:109 family=2 entries=44 op=nft_register_rule pid=4268 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:17.718000 audit[4268]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffdc520410 a2=0 a3=1 items=0 ppid=2387 pid=4268 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:17.718000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:17.727609 env[1320]: time="2024-12-13T14:22:17.727570764Z" level=info msg="CreateContainer within sandbox \"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"5e6652d8f683c6b209c57711d89ff86bda9e15ff2626348f322aa229ec76bcb4\"" Dec 13 14:22:17.728569 env[1320]: time="2024-12-13T14:22:17.728539463Z" level=info msg="StartContainer for \"5e6652d8f683c6b209c57711d89ff86bda9e15ff2626348f322aa229ec76bcb4\"" Dec 13 14:22:17.737000 audit[4279]: NETFILTER_CFG table=filter:110 family=2 entries=10 op=nft_register_rule pid=4279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:17.737000 audit[4279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffd90de270 a2=0 a3=1 items=0 ppid=2387 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:17.737000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:17.747000 audit[4279]: NETFILTER_CFG table=nat:111 family=2 entries=56 op=nft_register_chain pid=4279 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:17.747000 audit[4279]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffd90de270 a2=0 a3=1 items=0 ppid=2387 pid=4279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:17.747000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:17.764066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2905717533.mount: Deactivated successfully. Dec 13 14:22:17.764204 systemd[1]: run-netns-cni\x2d5080dc57\x2df0fb\x2dd883\x2d8719\x2d169b9fbf2cb6.mount: Deactivated successfully. 
Dec 13 14:22:17.821426 systemd-networkd[1096]: cali3b46c960816: Link UP Dec 13 14:22:17.823810 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:22:17.823942 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali3b46c960816: link becomes ready Dec 13 14:22:17.823815 systemd-networkd[1096]: cali3b46c960816: Gained carrier Dec 13 14:22:17.834869 env[1320]: time="2024-12-13T14:22:17.834574783Z" level=info msg="StartContainer for \"5e6652d8f683c6b209c57711d89ff86bda9e15ff2626348f322aa229ec76bcb4\" returns successfully" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.726 [INFO][4250] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--8gfp4-eth0 csi-node-driver- calico-system edcf1038-965b-4103-930d-3cbf62798dd0 927 0 2024-12-13 14:21:53 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-8gfp4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b46c960816 [] []}} ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.726 [INFO][4250] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.773 [INFO][4277] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" HandleID="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.785 [INFO][4277] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" HandleID="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002f2980), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-8gfp4", "timestamp":"2024-12-13 14:22:17.773501585 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.785 [INFO][4277] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.785 [INFO][4277] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.785 [INFO][4277] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.787 [INFO][4277] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.790 [INFO][4277] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.794 [INFO][4277] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.796 [INFO][4277] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.798 [INFO][4277] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.798 [INFO][4277] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.800 [INFO][4277] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0 Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.806 [INFO][4277] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.814 [INFO][4277] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" host="localhost" Dec 13 
14:22:17.842650 env[1320]: 2024-12-13 14:22:17.814 [INFO][4277] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" host="localhost" Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.814 [INFO][4277] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:17.842650 env[1320]: 2024-12-13 14:22:17.814 [INFO][4277] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" HandleID="k8s-pod-network.00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.819 [INFO][4250] cni-plugin/k8s.go 386: Populated endpoint ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gfp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edcf1038-965b-4103-930d-3cbf62798dd0", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), 
ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-8gfp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b46c960816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.819 [INFO][4250] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.819 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b46c960816 ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.824 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.828 [INFO][4250] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gfp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edcf1038-965b-4103-930d-3cbf62798dd0", ResourceVersion:"927", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0", Pod:"csi-node-driver-8gfp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b46c960816", MAC:"be:b9:7a:5a:fb:01", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:17.843309 env[1320]: 2024-12-13 14:22:17.840 [INFO][4250] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0" Namespace="calico-system" Pod="csi-node-driver-8gfp4" WorkloadEndpoint="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:17.857000 audit[4330]: NETFILTER_CFG table=filter:112 family=2 entries=38 op=nft_register_chain pid=4330 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:17.857000 audit[4330]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=19812 a0=3 a1=ffffd914bbf0 a2=0 a3=ffff92ceffa8 items=0 ppid=3527 pid=4330 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:17.857000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:17.859515 env[1320]: time="2024-12-13T14:22:17.859437550Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:17.859515 env[1320]: time="2024-12-13T14:22:17.859483551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:17.859515 env[1320]: time="2024-12-13T14:22:17.859494152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:17.859789 env[1320]: time="2024-12-13T14:22:17.859730556Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0 pid=4335 runtime=io.containerd.runc.v2 Dec 13 14:22:17.880474 systemd[1]: run-containerd-runc-k8s.io-00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0-runc.FhU8Zi.mount: Deactivated successfully. 
Dec 13 14:22:17.909585 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:17.922864 env[1320]: time="2024-12-13T14:22:17.922807713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8gfp4,Uid:edcf1038-965b-4103-930d-3cbf62798dd0,Namespace:calico-system,Attempt:1,} returns sandbox id \"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0\"" Dec 13 14:22:17.925161 env[1320]: time="2024-12-13T14:22:17.925134559Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 14:22:18.575208 env[1320]: time="2024-12-13T14:22:18.575171632Z" level=info msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" Dec 13 14:22:18.575484 env[1320]: time="2024-12-13T14:22:18.575441797Z" level=info msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" Dec 13 14:22:18.632368 systemd-networkd[1096]: caliaccefbfdedf: Gained IPv6LL Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.620 [INFO][4408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.621 [INFO][4408] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" iface="eth0" netns="/var/run/netns/cni-72dec3f7-5b3e-a646-d4ed-664e6a3efb1b" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.621 [INFO][4408] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" iface="eth0" netns="/var/run/netns/cni-72dec3f7-5b3e-a646-d4ed-664e6a3efb1b" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.621 [INFO][4408] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" iface="eth0" netns="/var/run/netns/cni-72dec3f7-5b3e-a646-d4ed-664e6a3efb1b" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.621 [INFO][4408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.621 [INFO][4408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.651 [INFO][4424] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.652 [INFO][4424] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.653 [INFO][4424] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.662 [WARNING][4424] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.662 [INFO][4424] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.663 [INFO][4424] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:18.667053 env[1320]: 2024-12-13 14:22:18.665 [INFO][4408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:18.671157 env[1320]: time="2024-12-13T14:22:18.670754868Z" level=info msg="TearDown network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" successfully" Dec 13 14:22:18.671157 env[1320]: time="2024-12-13T14:22:18.670791828Z" level=info msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" returns successfully" Dec 13 14:22:18.671478 env[1320]: time="2024-12-13T14:22:18.671448961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-m2x9g,Uid:4349a5e2-a677-4bbe-9d6f-1535050c8cda,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:22:18.673149 systemd[1]: run-netns-cni\x2d72dec3f7\x2d5b3e\x2da646\x2dd4ed\x2d664e6a3efb1b.mount: Deactivated successfully. 
Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" iface="eth0" netns="/var/run/netns/cni-0fc361cc-ffb1-bad6-a5f9-b95dbb84503a" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" iface="eth0" netns="/var/run/netns/cni-0fc361cc-ffb1-bad6-a5f9-b95dbb84503a" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" iface="eth0" netns="/var/run/netns/cni-0fc361cc-ffb1-bad6-a5f9-b95dbb84503a" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.635 [INFO][4409] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.666 [INFO][4430] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.666 [INFO][4430] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.666 [INFO][4430] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.679 [WARNING][4430] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.679 [INFO][4430] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.681 [INFO][4430] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:18.684833 env[1320]: 2024-12-13 14:22:18.683 [INFO][4409] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:18.685412 env[1320]: time="2024-12-13T14:22:18.684962580Z" level=info msg="TearDown network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" successfully" Dec 13 14:22:18.685412 env[1320]: time="2024-12-13T14:22:18.684988861Z" level=info msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" returns successfully" Dec 13 14:22:18.685758 env[1320]: time="2024-12-13T14:22:18.685722315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-gc4rp,Uid:610c2862-cbee-4137-8255-b514a33ef2be,Namespace:calico-apiserver,Attempt:1,}" Dec 13 14:22:18.700887 kubelet[2223]: E1213 14:22:18.698260 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:18.712629 kubelet[2223]: I1213 14:22:18.712602 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7778b75b4d-fztb7" podStartSLOduration=23.922199471 podStartE2EDuration="25.71256495s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:22:15.895435066 +0000 UTC m=+43.428376770" lastFinishedPulling="2024-12-13 14:22:17.685800545 +0000 UTC m=+45.218742249" observedRunningTime="2024-12-13 14:22:18.711838256 +0000 UTC m=+46.244780000" watchObservedRunningTime="2024-12-13 14:22:18.71256495 +0000 UTC m=+46.245506654" Dec 13 14:22:18.762418 systemd[1]: run-netns-cni\x2d0fc361cc\x2dffb1\x2dbad6\x2da5f9\x2db95dbb84503a.mount: Deactivated successfully. 
Dec 13 14:22:18.859556 systemd-networkd[1096]: califf2f3ad9b81: Link UP Dec 13 14:22:18.861436 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Dec 13 14:22:18.861503 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): califf2f3ad9b81: link becomes ready Dec 13 14:22:18.861661 systemd-networkd[1096]: califf2f3ad9b81: Gained carrier Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.782 [INFO][4456] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0 calico-apiserver-7cd595757- calico-apiserver 4349a5e2-a677-4bbe-9d6f-1535050c8cda 951 0 2024-12-13 14:21:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd595757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd595757-m2x9g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califf2f3ad9b81 [] []}} ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.785 [INFO][4456] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.816 [INFO][4492] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" HandleID="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" 
Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.826 [INFO][4492] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" HandleID="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003aa530), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd595757-m2x9g", "timestamp":"2024-12-13 14:22:18.816364864 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.826 [INFO][4492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.826 [INFO][4492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.826 [INFO][4492] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.828 [INFO][4492] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.831 [INFO][4492] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.840 [INFO][4492] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.842 [INFO][4492] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.844 [INFO][4492] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.845 [INFO][4492] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.846 [INFO][4492] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.849 [INFO][4492] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.854 [INFO][4492] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" host="localhost" Dec 13 
14:22:18.877149 env[1320]: 2024-12-13 14:22:18.854 [INFO][4492] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" host="localhost" Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.855 [INFO][4492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:18.877149 env[1320]: 2024-12-13 14:22:18.855 [INFO][4492] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" HandleID="k8s-pod-network.252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.857 [INFO][4456] cni-plugin/k8s.go 386: Populated endpoint ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"4349a5e2-a677-4bbe-9d6f-1535050c8cda", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd595757-m2x9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2f3ad9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.857 [INFO][4456] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.857 [INFO][4456] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califf2f3ad9b81 ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.861 [INFO][4456] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.864 [INFO][4456] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"4349a5e2-a677-4bbe-9d6f-1535050c8cda", ResourceVersion:"951", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d", Pod:"calico-apiserver-7cd595757-m2x9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2f3ad9b81", MAC:"f2:30:39:bc:b6:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:18.877711 env[1320]: 2024-12-13 14:22:18.873 [INFO][4456] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-m2x9g" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:18.887000 audit[4520]: 
NETFILTER_CFG table=filter:113 family=2 entries=52 op=nft_register_chain pid=4520 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:18.887000 audit[4520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27040 a0=3 a1=ffffcc357d10 a2=0 a3=ffff94b23fa8 items=0 ppid=3527 pid=4520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:18.887000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:18.891671 systemd-networkd[1096]: cali9728563c66e: Link UP Dec 13 14:22:18.893021 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cali9728563c66e: link becomes ready Dec 13 14:22:18.892801 systemd-networkd[1096]: cali9728563c66e: Gained carrier Dec 13 14:22:18.895355 env[1320]: time="2024-12-13T14:22:18.895289979Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:18.895355 env[1320]: time="2024-12-13T14:22:18.895331300Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:18.895502 env[1320]: time="2024-12-13T14:22:18.895341420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:18.895621 env[1320]: time="2024-12-13T14:22:18.895590305Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d pid=4529 runtime=io.containerd.runc.v2 Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.792 [INFO][4459] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0 calico-apiserver-7cd595757- calico-apiserver 610c2862-cbee-4137-8255-b514a33ef2be 952 0 2024-12-13 14:21:53 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:7cd595757 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-7cd595757-gc4rp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9728563c66e [] []}} ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.792 [INFO][4459] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.832 [INFO][4497] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" HandleID="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" 
Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.842 [INFO][4497] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" HandleID="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002daba0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-7cd595757-gc4rp", "timestamp":"2024-12-13 14:22:18.832201568 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.843 [INFO][4497] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.855 [INFO][4497] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.855 [INFO][4497] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.857 [INFO][4497] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.865 [INFO][4497] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.869 [INFO][4497] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.874 [INFO][4497] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.876 [INFO][4497] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.877 [INFO][4497] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.878 [INFO][4497] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94 Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.881 [INFO][4497] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.886 [INFO][4497] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" host="localhost" Dec 13 
14:22:18.913537 env[1320]: 2024-12-13 14:22:18.886 [INFO][4497] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" host="localhost" Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.886 [INFO][4497] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:18.913537 env[1320]: 2024-12-13 14:22:18.886 [INFO][4497] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" HandleID="k8s-pod-network.8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.889 [INFO][4459] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"610c2862-cbee-4137-8255-b514a33ef2be", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-7cd595757-gc4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9728563c66e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.889 [INFO][4459] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.889 [INFO][4459] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9728563c66e ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.892 [INFO][4459] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.893 [INFO][4459] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"610c2862-cbee-4137-8255-b514a33ef2be", ResourceVersion:"952", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94", Pod:"calico-apiserver-7cd595757-gc4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9728563c66e", MAC:"ea:56:18:3a:91:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:18.914115 env[1320]: 2024-12-13 14:22:18.907 [INFO][4459] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94" Namespace="calico-apiserver" Pod="calico-apiserver-7cd595757-gc4rp" WorkloadEndpoint="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:18.922000 audit[4557]: 
NETFILTER_CFG table=filter:114 family=2 entries=52 op=nft_register_chain pid=4557 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Dec 13 14:22:18.922000 audit[4557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26728 a0=3 a1=ffffcb88c910 a2=0 a3=ffff7f4a4fa8 items=0 ppid=3527 pid=4557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:18.922000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Dec 13 14:22:18.935559 env[1320]: time="2024-12-13T14:22:18.932765219Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 14:22:18.935559 env[1320]: time="2024-12-13T14:22:18.932811260Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 14:22:18.935559 env[1320]: time="2024-12-13T14:22:18.932821500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 14:22:18.935559 env[1320]: time="2024-12-13T14:22:18.933368070Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94 pid=4573 runtime=io.containerd.runc.v2 Dec 13 14:22:18.945675 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:18.987974 env[1320]: time="2024-12-13T14:22:18.987934598Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-m2x9g,Uid:4349a5e2-a677-4bbe-9d6f-1535050c8cda,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d\"" Dec 13 14:22:18.988698 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 14:22:19.006322 env[1320]: time="2024-12-13T14:22:19.006282868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7cd595757-gc4rp,Uid:610c2862-cbee-4137-8255-b514a33ef2be,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94\"" Dec 13 14:22:19.101452 env[1320]: time="2024-12-13T14:22:19.101408659Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:19.103065 env[1320]: time="2024-12-13T14:22:19.103033529Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:19.104613 env[1320]: time="2024-12-13T14:22:19.104588559Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/calico/csi:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:19.105897 env[1320]: time="2024-12-13T14:22:19.105867383Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:19.106262 env[1320]: time="2024-12-13T14:22:19.106231430Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:3c11734f3001b7070e7e2b5e64938f89891cf8c44f8997e86aa23c5d5bf70163\"" Dec 13 14:22:19.107688 env[1320]: time="2024-12-13T14:22:19.107614336Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:22:19.108233 env[1320]: time="2024-12-13T14:22:19.108193986Z" level=info msg="CreateContainer within sandbox \"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 14:22:19.119067 env[1320]: time="2024-12-13T14:22:19.118978469Z" level=info msg="CreateContainer within sandbox \"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"581be8429d1c8c19d2e7a1ccbeea0e232013818a43d638bd103783f3d713124d\"" Dec 13 14:22:19.119688 env[1320]: time="2024-12-13T14:22:19.119626202Z" level=info msg="StartContainer for \"581be8429d1c8c19d2e7a1ccbeea0e232013818a43d638bd103783f3d713124d\"" Dec 13 14:22:19.178254 env[1320]: time="2024-12-13T14:22:19.178200824Z" level=info msg="StartContainer for \"581be8429d1c8c19d2e7a1ccbeea0e232013818a43d638bd103783f3d713124d\" returns successfully" Dec 13 14:22:19.396038 systemd-networkd[1096]: cali3b46c960816: Gained IPv6LL Dec 13 14:22:20.676025 systemd-networkd[1096]: califf2f3ad9b81: Gained IPv6LL Dec 13 14:22:20.676268 systemd-networkd[1096]: cali9728563c66e: 
Gained IPv6LL Dec 13 14:22:21.158024 env[1320]: time="2024-12-13T14:22:21.157986536Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.160328 env[1320]: time="2024-12-13T14:22:21.160285858Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.161653 env[1320]: time="2024-12-13T14:22:21.161615442Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.163720 env[1320]: time="2024-12-13T14:22:21.163696600Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.164209 env[1320]: time="2024-12-13T14:22:21.164181289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:22:21.164902 env[1320]: time="2024-12-13T14:22:21.164723539Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 14:22:21.166890 env[1320]: time="2024-12-13T14:22:21.166860137Z" level=info msg="CreateContainer within sandbox \"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:22:21.179244 env[1320]: time="2024-12-13T14:22:21.179212721Z" level=info msg="CreateContainer within sandbox \"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"e116d5d482a082486803449ee8ffdd17bee2db72ad546336077e35ec45e85d53\"" Dec 13 14:22:21.180815 env[1320]: time="2024-12-13T14:22:21.180779430Z" level=info msg="StartContainer for \"e116d5d482a082486803449ee8ffdd17bee2db72ad546336077e35ec45e85d53\"" Dec 13 14:22:21.350893 env[1320]: time="2024-12-13T14:22:21.350830112Z" level=info msg="StartContainer for \"e116d5d482a082486803449ee8ffdd17bee2db72ad546336077e35ec45e85d53\" returns successfully" Dec 13 14:22:21.383711 env[1320]: time="2024-12-13T14:22:21.383668467Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.386119 env[1320]: time="2024-12-13T14:22:21.386089831Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.388024 env[1320]: time="2024-12-13T14:22:21.387997386Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.389820 env[1320]: time="2024-12-13T14:22:21.389792258Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:21.390170 env[1320]: time="2024-12-13T14:22:21.390135264Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:5451b31bd8d0784796fa1204c4ec22975a270e21feadf2c5095fe41a38524c6c\"" Dec 13 14:22:21.391595 env[1320]: time="2024-12-13T14:22:21.391573490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 14:22:21.392830 env[1320]: 
time="2024-12-13T14:22:21.392801993Z" level=info msg="CreateContainer within sandbox \"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 14:22:21.404166 env[1320]: time="2024-12-13T14:22:21.404133798Z" level=info msg="CreateContainer within sandbox \"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"647e8290eb073a21602eb23ab3716eb8e7181076808d6527e7a9a71a17ad7dba\"" Dec 13 14:22:21.406477 env[1320]: time="2024-12-13T14:22:21.406452440Z" level=info msg="StartContainer for \"647e8290eb073a21602eb23ab3716eb8e7181076808d6527e7a9a71a17ad7dba\"" Dec 13 14:22:21.469613 env[1320]: time="2024-12-13T14:22:21.469520383Z" level=info msg="StartContainer for \"647e8290eb073a21602eb23ab3716eb8e7181076808d6527e7a9a71a17ad7dba\" returns successfully" Dec 13 14:22:21.719229 kubelet[2223]: I1213 14:22:21.719188 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd595757-gc4rp" podStartSLOduration=26.33510291 podStartE2EDuration="28.718812702s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:22:19.007301008 +0000 UTC m=+46.540242712" lastFinishedPulling="2024-12-13 14:22:21.39101076 +0000 UTC m=+48.923952504" observedRunningTime="2024-12-13 14:22:21.718275212 +0000 UTC m=+49.251216916" watchObservedRunningTime="2024-12-13 14:22:21.718812702 +0000 UTC m=+49.251754406" Dec 13 14:22:21.746000 audit[4735]: NETFILTER_CFG table=filter:115 family=2 entries=10 op=nft_register_rule pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:21.746000 audit[4735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=ffffca8decb0 a2=0 a3=1 items=0 ppid=2387 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:21.746000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:21.755000 audit[4735]: NETFILTER_CFG table=nat:116 family=2 entries=20 op=nft_register_rule pid=4735 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:21.755000 audit[4735]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffca8decb0 a2=0 a3=1 items=0 ppid=2387 pid=4735 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:21.755000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:21.769000 audit[4737]: NETFILTER_CFG table=filter:117 family=2 entries=10 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:21.769000 audit[4737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3676 a0=3 a1=fffffd196bc0 a2=0 a3=1 items=0 ppid=2387 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:21.769000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:21.774000 audit[4737]: NETFILTER_CFG table=nat:118 family=2 entries=20 op=nft_register_rule pid=4737 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:21.774000 audit[4737]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffd196bc0 a2=0 a3=1 items=0 ppid=2387 pid=4737 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:21.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:22.126935 systemd[1]: Started sshd@13-10.0.0.138:22-10.0.0.1:44712.service. Dec 13 14:22:22.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:22.128960 kernel: kauditd_printk_skb: 37 callbacks suppressed Dec 13 14:22:22.129033 kernel: audit: type=1130 audit(1734099742.126:471): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:22.182000 audit[4738]: USER_ACCT pid=4738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.183125 sshd[4738]: Accepted publickey for core from 10.0.0.1 port 44712 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:22.183000 audit[4738]: CRED_ACQ pid=4738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.188043 sshd[4738]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:22.189192 kernel: audit: type=1101 audit(1734099742.182:472): pid=4738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.189265 kernel: audit: type=1103 audit(1734099742.183:473): pid=4738 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.189289 kernel: audit: type=1006 audit(1734099742.183:474): pid=4738 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Dec 13 14:22:22.183000 audit[4738]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcabaf470 a2=3 a3=1 items=0 ppid=1 pid=4738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:22.194581 kernel: audit: type=1300 audit(1734099742.183:474): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffcabaf470 a2=3 a3=1 items=0 ppid=1 pid=4738 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:22.183000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:22.195994 kernel: audit: type=1327 audit(1734099742.183:474): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:22.197381 systemd-logind[1303]: New session 14 of user core. Dec 13 14:22:22.198349 systemd[1]: Started session-14.scope. 
Dec 13 14:22:22.201000 audit[4738]: USER_START pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.202000 audit[4741]: CRED_ACQ pid=4741 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.211060 kernel: audit: type=1105 audit(1734099742.201:475): pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.211148 kernel: audit: type=1103 audit(1734099742.202:476): pid=4741 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.382428 sshd[4738]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:22.382000 audit[4738]: USER_END pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.384990 systemd[1]: sshd@13-10.0.0.138:22-10.0.0.1:44712.service: Deactivated successfully. Dec 13 14:22:22.385964 systemd-logind[1303]: Session 14 logged out. Waiting for processes to exit. Dec 13 14:22:22.386015 systemd[1]: session-14.scope: Deactivated successfully. 
Dec 13 14:22:22.386807 systemd-logind[1303]: Removed session 14. Dec 13 14:22:22.382000 audit[4738]: CRED_DISP pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.389996 kernel: audit: type=1106 audit(1734099742.382:477): pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.390079 kernel: audit: type=1104 audit(1734099742.382:478): pid=4738 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:22.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.0.138:22-10.0.0.1:44712 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:22.529652 env[1320]: time="2024-12-13T14:22:22.529605872Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:22.530798 env[1320]: time="2024-12-13T14:22:22.530770452Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:22.532273 env[1320]: time="2024-12-13T14:22:22.532236918Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:22.533789 env[1320]: time="2024-12-13T14:22:22.533746545Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 14:22:22.534301 env[1320]: time="2024-12-13T14:22:22.534277555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:3eb557f7694f230afd24a75a691bcda4c0a7bfe87a981386dcd4ecf2b0701349\"" Dec 13 14:22:22.537073 env[1320]: time="2024-12-13T14:22:22.537041564Z" level=info msg="CreateContainer within sandbox \"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 14:22:22.549169 env[1320]: time="2024-12-13T14:22:22.549136859Z" level=info msg="CreateContainer within sandbox \"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"64befe30f3b91f9f0cd3258b2e6728ee923e1a59676f5f194ff1925e7df3eb32\"" Dec 13 14:22:22.551054 env[1320]: time="2024-12-13T14:22:22.550987572Z" level=info msg="StartContainer for \"64befe30f3b91f9f0cd3258b2e6728ee923e1a59676f5f194ff1925e7df3eb32\"" Dec 13 14:22:22.642074 env[1320]: time="2024-12-13T14:22:22.641434063Z" level=info msg="StartContainer for \"64befe30f3b91f9f0cd3258b2e6728ee923e1a59676f5f194ff1925e7df3eb32\" returns successfully" Dec 13 14:22:22.713996 kubelet[2223]: I1213 14:22:22.713960 2223 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 14:22:22.726082 kubelet[2223]: I1213 14:22:22.726036 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-7cd595757-m2x9g" podStartSLOduration=27.550569575 podStartE2EDuration="29.725999289s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:22:18.98905254 +0000 UTC m=+46.521994204" lastFinishedPulling="2024-12-13 14:22:21.164482174 +0000 UTC m=+48.697423918" observedRunningTime="2024-12-13 14:22:21.735873731 +0000 UTC m=+49.268815435" watchObservedRunningTime="2024-12-13 14:22:22.725999289 +0000 UTC m=+50.258940953" Dec 13 14:22:22.726508 kubelet[2223]: I1213 14:22:22.726489 2223 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8gfp4" podStartSLOduration=25.115964916 podStartE2EDuration="29.726469538s" podCreationTimestamp="2024-12-13 14:21:53 +0000 UTC" firstStartedPulling="2024-12-13 14:22:17.924051938 +0000 UTC m=+45.456993602" lastFinishedPulling="2024-12-13 14:22:22.53455656 +0000 UTC m=+50.067498224" observedRunningTime="2024-12-13 14:22:22.7243737 +0000 UTC m=+50.257315404" watchObservedRunningTime="2024-12-13 14:22:22.726469538 +0000 UTC m=+50.259411242" Dec 13 14:22:23.148000 audit[4793]: NETFILTER_CFG table=filter:119 family=2 entries=9 op=nft_register_rule pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 
13 14:22:23.148000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffc32d1650 a2=0 a3=1 items=0 ppid=2387 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:23.148000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:23.154000 audit[4793]: NETFILTER_CFG table=nat:120 family=2 entries=27 op=nft_register_chain pid=4793 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:23.154000 audit[4793]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=9348 a0=3 a1=ffffc32d1650 a2=0 a3=1 items=0 ppid=2387 pid=4793 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:23.154000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:23.650942 kubelet[2223]: I1213 14:22:23.650893 2223 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 14:22:23.656661 kubelet[2223]: I1213 14:22:23.656634 2223 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 14:22:26.767172 kubelet[2223]: E1213 14:22:26.767134 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:27.384085 systemd[1]: Started sshd@14-10.0.0.138:22-10.0.0.1:58462.service. 
Dec 13 14:22:27.385262 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:22:27.385302 kernel: audit: type=1130 audit(1734099747.383:482): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:58462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:27.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:58462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:27.433000 audit[4821]: USER_ACCT pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.434465 sshd[4821]: Accepted publickey for core from 10.0.0.1 port 58462 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:27.436067 sshd[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:27.434000 audit[4821]: CRED_ACQ pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.440574 kernel: audit: type=1101 audit(1734099747.433:483): pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.440650 kernel: audit: type=1103 audit(1734099747.434:484): pid=4821 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" 
exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.440678 kernel: audit: type=1006 audit(1734099747.434:485): pid=4821 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Dec 13 14:22:27.441729 systemd-logind[1303]: New session 15 of user core. Dec 13 14:22:27.442192 systemd[1]: Started session-15.scope. Dec 13 14:22:27.434000 audit[4821]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe774a2a0 a2=3 a3=1 items=0 ppid=1 pid=4821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:27.445703 kernel: audit: type=1300 audit(1734099747.434:485): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe774a2a0 a2=3 a3=1 items=0 ppid=1 pid=4821 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:27.434000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:27.446945 kernel: audit: type=1327 audit(1734099747.434:485): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:27.447022 kernel: audit: type=1105 audit(1734099747.445:486): pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.445000 audit[4821]: USER_START pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.446000 audit[4824]: CRED_ACQ 
pid=4824 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.453232 kernel: audit: type=1103 audit(1734099747.446:487): pid=4824 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.614739 sshd[4821]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:27.615000 audit[4821]: USER_END pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.619486 systemd-logind[1303]: Session 15 logged out. Waiting for processes to exit. Dec 13 14:22:27.619633 systemd[1]: sshd@14-10.0.0.138:22-10.0.0.1:58462.service: Deactivated successfully. Dec 13 14:22:27.615000 audit[4821]: CRED_DISP pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.620484 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 14:22:27.620881 systemd-logind[1303]: Removed session 15. 
Dec 13 14:22:27.623383 kernel: audit: type=1106 audit(1734099747.615:488): pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.623463 kernel: audit: type=1104 audit(1734099747.615:489): pid=4821 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:27.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.0.138:22-10.0.0.1:58462 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:32.542781 env[1320]: time="2024-12-13T14:22:32.542414888Z" level=info msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" Dec 13 14:22:32.618186 systemd[1]: Started sshd@15-10.0.0.138:22-10.0.0.1:46742.service. Dec 13 14:22:32.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:46742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:32.619006 kernel: kauditd_printk_skb: 1 callbacks suppressed Dec 13 14:22:32.619065 kernel: audit: type=1130 audit(1734099752.617:491): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:46742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.577 [WARNING][4861] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"610c2862-cbee-4137-8255-b514a33ef2be", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94", Pod:"calico-apiserver-7cd595757-gc4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9728563c66e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.577 [INFO][4861] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.577 [INFO][4861] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" iface="eth0" netns="" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.577 [INFO][4861] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.578 [INFO][4861] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.608 [INFO][4870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.608 [INFO][4870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.608 [INFO][4870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.623 [WARNING][4870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.623 [INFO][4870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.624 [INFO][4870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:32.638584 env[1320]: 2024-12-13 14:22:32.631 [INFO][4861] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.638584 env[1320]: time="2024-12-13T14:22:32.638301861Z" level=info msg="TearDown network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" successfully" Dec 13 14:22:32.638584 env[1320]: time="2024-12-13T14:22:32.638332421Z" level=info msg="StopPodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" returns successfully" Dec 13 14:22:32.639726 env[1320]: time="2024-12-13T14:22:32.639692802Z" level=info msg="RemovePodSandbox for \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" Dec 13 14:22:32.639797 env[1320]: time="2024-12-13T14:22:32.639729243Z" level=info msg="Forcibly stopping sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\"" Dec 13 14:22:32.672000 audit[4879]: USER_ACCT pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' 
Dec 13 14:22:32.673647 sshd[4879]: Accepted publickey for core from 10.0.0.1 port 46742 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:32.675404 sshd[4879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:32.674000 audit[4879]: CRED_ACQ pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.680025 kernel: audit: type=1101 audit(1734099752.672:492): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.680091 kernel: audit: type=1103 audit(1734099752.674:493): pid=4879 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.680113 kernel: audit: type=1006 audit(1734099752.674:494): pid=4879 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Dec 13 14:22:32.679788 systemd-logind[1303]: New session 16 of user core. Dec 13 14:22:32.680272 systemd[1]: Started session-16.scope. 
Dec 13 14:22:32.681791 kernel: audit: type=1300 audit(1734099752.674:494): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9758940 a2=3 a3=1 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:32.674000 audit[4879]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffc9758940 a2=3 a3=1 items=0 ppid=1 pid=4879 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:32.674000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:32.686356 kernel: audit: type=1327 audit(1734099752.674:494): proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:32.686000 audit[4879]: USER_START pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.687000 audit[4926]: CRED_ACQ pid=4926 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.694184 kernel: audit: type=1105 audit(1734099752.686:495): pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.694252 kernel: audit: type=1103 audit(1734099752.687:496): pid=4926 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix 
acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.689 [WARNING][4913] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"610c2862-cbee-4137-8255-b514a33ef2be", ResourceVersion:"984", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dce2ed7e244c964c8a6bd2f311a5aef087a720f23d0f777d70309b3ecf78a94", Pod:"calico-apiserver-7cd595757-gc4rp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9728563c66e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.689 [INFO][4913] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.689 [INFO][4913] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" iface="eth0" netns="" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.689 [INFO][4913] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.689 [INFO][4913] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.708 [INFO][4927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.708 [INFO][4927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.708 [INFO][4927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.716 [WARNING][4927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.716 [INFO][4927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" HandleID="k8s-pod-network.61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Workload="localhost-k8s-calico--apiserver--7cd595757--gc4rp-eth0" Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.718 [INFO][4927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:32.721376 env[1320]: 2024-12-13 14:22:32.720 [INFO][4913] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b" Dec 13 14:22:32.721764 env[1320]: time="2024-12-13T14:22:32.721402594Z" level=info msg="TearDown network for sandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" successfully" Dec 13 14:22:32.727898 env[1320]: time="2024-12-13T14:22:32.727861495Z" level=info msg="RemovePodSandbox \"61446eef4852b9f443e6723a893c3ea66bc61082ea74625ff0978957362ec02b\" returns successfully" Dec 13 14:22:32.728462 env[1320]: time="2024-12-13T14:22:32.728423823Z" level=info msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.778 [WARNING][4957] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gfp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edcf1038-965b-4103-930d-3cbf62798dd0", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0", Pod:"csi-node-driver-8gfp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b46c960816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.778 [INFO][4957] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.779 [INFO][4957] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" iface="eth0" netns="" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.779 [INFO][4957] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.779 [INFO][4957] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.799 [INFO][4967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.799 [INFO][4967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.799 [INFO][4967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.810 [WARNING][4967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.811 [INFO][4967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.812 [INFO][4967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:22:32.821261 env[1320]: 2024-12-13 14:22:32.815 [INFO][4957] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.821261 env[1320]: time="2024-12-13T14:22:32.821182467Z" level=info msg="TearDown network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" successfully" Dec 13 14:22:32.821261 env[1320]: time="2024-12-13T14:22:32.821211068Z" level=info msg="StopPodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" returns successfully" Dec 13 14:22:32.825829 env[1320]: time="2024-12-13T14:22:32.825797499Z" level=info msg="RemovePodSandbox for \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" Dec 13 14:22:32.825996 env[1320]: time="2024-12-13T14:22:32.825956301Z" level=info msg="Forcibly stopping sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\"" Dec 13 14:22:32.845167 sshd[4879]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:32.845000 audit[4879]: USER_END pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.846726 systemd[1]: Started sshd@16-10.0.0.138:22-10.0.0.1:46746.service. Dec 13 14:22:32.845000 audit[4879]: CRED_DISP pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.851460 systemd[1]: sshd@15-10.0.0.138:22-10.0.0.1:46742.service: Deactivated successfully. Dec 13 14:22:32.852461 systemd[1]: session-16.scope: Deactivated successfully. Dec 13 14:22:32.852490 systemd-logind[1303]: Session 16 logged out. 
Waiting for processes to exit. Dec 13 14:22:32.854429 kernel: audit: type=1106 audit(1734099752.845:497): pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.854504 kernel: audit: type=1104 audit(1734099752.845:498): pid=4879 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:46746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:32.850000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.0.138:22-10.0.0.1:46742 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:32.854183 systemd-logind[1303]: Removed session 16. 
Dec 13 14:22:32.898000 audit[4996]: USER_ACCT pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.899771 sshd[4996]: Accepted publickey for core from 10.0.0.1 port 46746 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:32.901000 audit[4996]: CRED_ACQ pid=4996 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.901000 audit[4996]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffff1a635a0 a2=3 a3=1 items=0 ppid=1 pid=4996 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:32.901000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:32.902239 sshd[4996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:32.905960 systemd-logind[1303]: New session 17 of user core. Dec 13 14:22:32.906766 systemd[1]: Started session-17.scope. Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.874 [WARNING][4990] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--8gfp4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"edcf1038-965b-4103-930d-3cbf62798dd0", ResourceVersion:"994", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"00c10a8d2ffda0346c7337b0013c90a100a48fb7bced0592bfabfc0375eee7e0", Pod:"csi-node-driver-8gfp4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b46c960816", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.874 [INFO][4990] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.874 [INFO][4990] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" iface="eth0" netns="" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.875 [INFO][4990] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.875 [INFO][4990] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.892 [INFO][5001] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.892 [INFO][5001] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.892 [INFO][5001] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.900 [WARNING][5001] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.900 [INFO][5001] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" HandleID="k8s-pod-network.cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Workload="localhost-k8s-csi--node--driver--8gfp4-eth0" Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.901 [INFO][5001] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 14:22:32.908162 env[1320]: 2024-12-13 14:22:32.904 [INFO][4990] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9" Dec 13 14:22:32.908531 env[1320]: time="2024-12-13T14:22:32.908191301Z" level=info msg="TearDown network for sandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" successfully" Dec 13 14:22:32.911533 env[1320]: time="2024-12-13T14:22:32.911497153Z" level=info msg="RemovePodSandbox \"cfcfb05b5a0bade7ff9aa1751d9aacc5a082524c7b8ad1869b25ce968349b3b9\" returns successfully" Dec 13 14:22:32.912049 env[1320]: time="2024-12-13T14:22:32.911972440Z" level=info msg="StopPodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" Dec 13 14:22:32.911000 audit[4996]: USER_START pid=4996 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.912000 audit[5009]: CRED_ACQ pid=5009 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.944 [WARNING][5025] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bxn7n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08600e9c-2e7c-44be-b230-0c231e6c0b50", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11", Pod:"coredns-76f75df574-bxn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938c225f7bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.944 [INFO][5025] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.944 [INFO][5025] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" iface="eth0" netns="" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.944 [INFO][5025] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.944 [INFO][5025] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.963 [INFO][5032] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.963 [INFO][5032] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.964 [INFO][5032] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.973 [WARNING][5032] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.973 [INFO][5032] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.974 [INFO][5032] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:32.979362 env[1320]: 2024-12-13 14:22:32.977 [INFO][5025] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:32.979903 env[1320]: time="2024-12-13T14:22:32.979382889Z" level=info msg="TearDown network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" successfully" Dec 13 14:22:32.979903 env[1320]: time="2024-12-13T14:22:32.979413930Z" level=info msg="StopPodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" returns successfully" Dec 13 14:22:32.979903 env[1320]: time="2024-12-13T14:22:32.979882457Z" level=info msg="RemovePodSandbox for \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" Dec 13 14:22:32.979979 env[1320]: time="2024-12-13T14:22:32.979912817Z" level=info msg="Forcibly stopping sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\"" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.021 [WARNING][5061] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--bxn7n-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"08600e9c-2e7c-44be-b230-0c231e6c0b50", ResourceVersion:"902", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"57b76312d10f99d90165a662952c616ebb3b72dcc165b50657c6192d134fce11", Pod:"coredns-76f75df574-bxn7n", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali938c225f7bd", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.021 [INFO][5061] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.021 [INFO][5061] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" iface="eth0" netns="" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.021 [INFO][5061] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.021 [INFO][5061] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.039 [INFO][5069] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.039 [INFO][5069] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.039 [INFO][5069] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.051 [WARNING][5069] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.051 [INFO][5069] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" HandleID="k8s-pod-network.9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Workload="localhost-k8s-coredns--76f75df574--bxn7n-eth0" Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.053 [INFO][5069] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.056866 env[1320]: 2024-12-13 14:22:33.055 [INFO][5061] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8" Dec 13 14:22:33.056866 env[1320]: time="2024-12-13T14:22:33.056664163Z" level=info msg="TearDown network for sandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" successfully" Dec 13 14:22:33.063259 env[1320]: time="2024-12-13T14:22:33.063226744Z" level=info msg="RemovePodSandbox \"9ef9673691f52e9393622647fdcb240f8ad98495e844b41fe2e89f4a735a74e8\" returns successfully" Dec 13 14:22:33.063733 env[1320]: time="2024-12-13T14:22:33.063706192Z" level=info msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.097 [WARNING][5093] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0", GenerateName:"calico-kube-controllers-7778b75b4d-", Namespace:"calico-system", SelfLink:"", UID:"accb085d-f789-4c9c-a736-d74c6e73b549", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b75b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c", Pod:"calico-kube-controllers-7778b75b4d-fztb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa663756530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.098 [INFO][5093] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.098 [INFO][5093] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" iface="eth0" netns="" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.098 [INFO][5093] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.098 [INFO][5093] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.121 [INFO][5100] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.121 [INFO][5100] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.121 [INFO][5100] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.129 [WARNING][5100] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.129 [INFO][5100] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.130 [INFO][5100] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.134971 env[1320]: 2024-12-13 14:22:33.133 [INFO][5093] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.135552 env[1320]: time="2024-12-13T14:22:33.135509698Z" level=info msg="TearDown network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" successfully" Dec 13 14:22:33.135630 env[1320]: time="2024-12-13T14:22:33.135612940Z" level=info msg="StopPodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" returns successfully" Dec 13 14:22:33.136182 env[1320]: time="2024-12-13T14:22:33.136158388Z" level=info msg="RemovePodSandbox for \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" Dec 13 14:22:33.136327 env[1320]: time="2024-12-13T14:22:33.136286550Z" level=info msg="Forcibly stopping sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\"" Dec 13 14:22:33.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:46748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:33.144482 systemd[1]: Started sshd@17-10.0.0.138:22-10.0.0.1:46748.service. Dec 13 14:22:33.144909 sshd[4996]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:33.145000 audit[4996]: USER_END pid=4996 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.145000 audit[4996]: CRED_DISP pid=4996 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.0.138:22-10.0.0.1:46746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:33.147392 systemd[1]: sshd@16-10.0.0.138:22-10.0.0.1:46746.service: Deactivated successfully. Dec 13 14:22:33.149196 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 14:22:33.149703 systemd-logind[1303]: Session 17 logged out. Waiting for processes to exit. Dec 13 14:22:33.150467 systemd-logind[1303]: Removed session 17. 
Dec 13 14:22:33.193000 audit[5126]: USER_ACCT pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.194463 sshd[5126]: Accepted publickey for core from 10.0.0.1 port 46748 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:33.194000 audit[5126]: CRED_ACQ pid=5126 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.194000 audit[5126]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffef715de0 a2=3 a3=1 items=0 ppid=1 pid=5126 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:33.194000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:33.195705 sshd[5126]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:33.200221 systemd[1]: Started session-18.scope. Dec 13 14:22:33.200567 systemd-logind[1303]: New session 18 of user core. 
Dec 13 14:22:33.204000 audit[5126]: USER_START pid=5126 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.205000 audit[5143]: CRED_ACQ pid=5143 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.173 [WARNING][5123] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0", GenerateName:"calico-kube-controllers-7778b75b4d-", Namespace:"calico-system", SelfLink:"", UID:"accb085d-f789-4c9c-a736-d74c6e73b549", ResourceVersion:"956", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7778b75b4d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5ec2e1352712edd4db22335cd9678eeeb0addbf347c3ee264a568b89cf28918c", 
Pod:"calico-kube-controllers-7778b75b4d-fztb7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"caliaa663756530", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.174 [INFO][5123] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.174 [INFO][5123] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" iface="eth0" netns="" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.174 [INFO][5123] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.174 [INFO][5123] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.194 [INFO][5135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.194 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.194 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.206 [WARNING][5135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.206 [INFO][5135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" HandleID="k8s-pod-network.561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Workload="localhost-k8s-calico--kube--controllers--7778b75b4d--fztb7-eth0" Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.207 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.214419 env[1320]: 2024-12-13 14:22:33.209 [INFO][5123] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60" Dec 13 14:22:33.214935 env[1320]: time="2024-12-13T14:22:33.214891601Z" level=info msg="TearDown network for sandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" successfully" Dec 13 14:22:33.217967 env[1320]: time="2024-12-13T14:22:33.217938608Z" level=info msg="RemovePodSandbox \"561ba1ce397dd9f102d8aa9d5f964096dac58685cd09c1d1444ed7ca82832e60\" returns successfully" Dec 13 14:22:33.218557 env[1320]: time="2024-12-13T14:22:33.218530257Z" level=info msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.251 [WARNING][5159] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"4349a5e2-a677-4bbe-9d6f-1535050c8cda", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d", Pod:"calico-apiserver-7cd595757-m2x9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2f3ad9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.252 [INFO][5159] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.252 [INFO][5159] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" iface="eth0" netns="" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.252 [INFO][5159] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.252 [INFO][5159] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.271 [INFO][5172] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.271 [INFO][5172] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.272 [INFO][5172] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.279 [WARNING][5172] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.279 [INFO][5172] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.281 [INFO][5172] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.284856 env[1320]: 2024-12-13 14:22:33.283 [INFO][5159] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.285291 env[1320]: time="2024-12-13T14:22:33.284899320Z" level=info msg="TearDown network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" successfully" Dec 13 14:22:33.285291 env[1320]: time="2024-12-13T14:22:33.284930321Z" level=info msg="StopPodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" returns successfully" Dec 13 14:22:33.285568 env[1320]: time="2024-12-13T14:22:33.285535810Z" level=info msg="RemovePodSandbox for \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" Dec 13 14:22:33.285784 env[1320]: time="2024-12-13T14:22:33.285745973Z" level=info msg="Forcibly stopping sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\"" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.321 [WARNING][5197] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0", GenerateName:"calico-apiserver-7cd595757-", Namespace:"calico-apiserver", SelfLink:"", UID:"4349a5e2-a677-4bbe-9d6f-1535050c8cda", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"7cd595757", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"252ad8fb8968a6f452b3383dd6292867ebef80dbd07353f3e1223c97bb3f953d", Pod:"calico-apiserver-7cd595757-m2x9g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califf2f3ad9b81", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.321 [INFO][5197] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.321 [INFO][5197] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" iface="eth0" netns="" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.321 [INFO][5197] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.321 [INFO][5197] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.340 [INFO][5205] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.340 [INFO][5205] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.340 [INFO][5205] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.348 [WARNING][5205] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.348 [INFO][5205] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" HandleID="k8s-pod-network.5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Workload="localhost-k8s-calico--apiserver--7cd595757--m2x9g-eth0" Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.349 [INFO][5205] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.353237 env[1320]: 2024-12-13 14:22:33.351 [INFO][5197] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f" Dec 13 14:22:33.353750 env[1320]: time="2024-12-13T14:22:33.353713580Z" level=info msg="TearDown network for sandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" successfully" Dec 13 14:22:33.356395 env[1320]: time="2024-12-13T14:22:33.356363741Z" level=info msg="RemovePodSandbox \"5cddf81831cb8c391f541777d78abb97987ca20d1d2bdf9e20f8cde6ce04c79f\" returns successfully" Dec 13 14:22:33.357040 env[1320]: time="2024-12-13T14:22:33.357013471Z" level=info msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.390 [WARNING][5228] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hzwcj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9893a5a-eda8-409b-b506-579bb2498aa1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018", Pod:"coredns-76f75df574-hzwcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaccefbfdedf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.391 [INFO][5228] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.391 [INFO][5228] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" iface="eth0" netns="" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.391 [INFO][5228] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.391 [INFO][5228] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.412 [INFO][5235] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.412 [INFO][5235] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.412 [INFO][5235] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.421 [WARNING][5235] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.421 [INFO][5235] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.422 [INFO][5235] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.426890 env[1320]: 2024-12-13 14:22:33.424 [INFO][5228] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.426890 env[1320]: time="2024-12-13T14:22:33.426132816Z" level=info msg="TearDown network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" successfully" Dec 13 14:22:33.426890 env[1320]: time="2024-12-13T14:22:33.426162377Z" level=info msg="StopPodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" returns successfully" Dec 13 14:22:33.427352 env[1320]: time="2024-12-13T14:22:33.426975429Z" level=info msg="RemovePodSandbox for \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" Dec 13 14:22:33.427352 env[1320]: time="2024-12-13T14:22:33.427007550Z" level=info msg="Forcibly stopping sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\"" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.459 [WARNING][5259] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--76f75df574--hzwcj-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"f9893a5a-eda8-409b-b506-579bb2498aa1", ResourceVersion:"935", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 14, 21, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f5116ee1a8926435949c135ccdcee476577fe23b0ca6bae9455c6b520db2d018", Pod:"coredns-76f75df574-hzwcj", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaccefbfdedf", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.460 [INFO][5259] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.460 [INFO][5259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" iface="eth0" netns="" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.460 [INFO][5259] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.460 [INFO][5259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.479 [INFO][5266] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.479 [INFO][5266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.479 [INFO][5266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.492 [WARNING][5266] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.492 [INFO][5266] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" HandleID="k8s-pod-network.3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Workload="localhost-k8s-coredns--76f75df574--hzwcj-eth0" Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.494 [INFO][5266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 14:22:33.501094 env[1320]: 2024-12-13 14:22:33.497 [INFO][5259] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c" Dec 13 14:22:33.501094 env[1320]: time="2024-12-13T14:22:33.499382225Z" level=info msg="TearDown network for sandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" successfully" Dec 13 14:22:33.505026 env[1320]: time="2024-12-13T14:22:33.504994871Z" level=info msg="RemovePodSandbox \"3903db18b1ab54395d637be52e862390cd980ebdf98eb906d49f4d35db144e0c\" returns successfully" Dec 13 14:22:34.674000 audit[5281]: NETFILTER_CFG table=filter:121 family=2 entries=8 op=nft_register_rule pid=5281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:34.674000 audit[5281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffe9543060 a2=0 a3=1 items=0 ppid=2387 pid=5281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:34.674000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:34.677017 sshd[5126]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:34.679460 systemd[1]: Started sshd@18-10.0.0.138:22-10.0.0.1:46762.service. Dec 13 14:22:34.677000 audit[5126]: USER_END pid=5126 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:34.677000 audit[5126]: CRED_DISP pid=5126 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:34.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:46762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:34.682976 systemd-logind[1303]: Session 18 logged out. Waiting for processes to exit. Dec 13 14:22:34.683092 systemd[1]: sshd@17-10.0.0.138:22-10.0.0.1:46748.service: Deactivated successfully. Dec 13 14:22:34.683863 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 14:22:34.682000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.0.138:22-10.0.0.1:46748 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:34.684312 systemd-logind[1303]: Removed session 18. 
Dec 13 14:22:34.684000 audit[5281]: NETFILTER_CFG table=nat:122 family=2 entries=22 op=nft_register_rule pid=5281 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:34.684000 audit[5281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe9543060 a2=0 a3=1 items=0 ppid=2387 pid=5281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:34.684000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:34.703000 audit[5287]: NETFILTER_CFG table=filter:123 family=2 entries=20 op=nft_register_rule pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:34.703000 audit[5287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=11860 a0=3 a1=ffffe6439f20 a2=0 a3=1 items=0 ppid=2387 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:34.703000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:34.707000 audit[5287]: NETFILTER_CFG table=nat:124 family=2 entries=22 op=nft_register_rule pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:34.707000 audit[5287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6540 a0=3 a1=ffffe6439f20 a2=0 a3=1 items=0 ppid=2387 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:34.707000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:34.725000 audit[5282]: USER_ACCT pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:34.726590 sshd[5282]: Accepted publickey for core from 10.0.0.1 port 46762 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:34.726000 audit[5282]: CRED_ACQ pid=5282 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:34.727000 audit[5282]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd5c5ec20 a2=3 a3=1 items=0 ppid=1 pid=5282 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:34.727000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:34.728162 sshd[5282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:34.731903 systemd-logind[1303]: New session 19 of user core. Dec 13 14:22:34.732498 systemd[1]: Started session-19.scope. 
Dec 13 14:22:34.735000 audit[5282]: USER_START pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:34.736000 audit[5289]: CRED_ACQ pid=5289 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.002633 sshd[5282]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:35.003000 audit[5282]: USER_END pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.003000 audit[5282]: CRED_DISP pid=5282 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.005284 systemd[1]: Started sshd@19-10.0.0.138:22-10.0.0.1:46766.service. Dec 13 14:22:35.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:46766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:35.012911 systemd-logind[1303]: Session 19 logged out. Waiting for processes to exit. Dec 13 14:22:35.013082 systemd[1]: sshd@18-10.0.0.138:22-10.0.0.1:46762.service: Deactivated successfully. 
Dec 13 14:22:35.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.0.138:22-10.0.0.1:46762 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:35.013972 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 14:22:35.015004 systemd-logind[1303]: Removed session 19. Dec 13 14:22:35.049000 audit[5297]: USER_ACCT pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.050895 sshd[5297]: Accepted publickey for core from 10.0.0.1 port 46766 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:35.050000 audit[5297]: CRED_ACQ pid=5297 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.050000 audit[5297]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=fffffa31c590 a2=3 a3=1 items=0 ppid=1 pid=5297 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:35.050000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:35.052015 sshd[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:35.056296 systemd[1]: Started session-20.scope. Dec 13 14:22:35.056674 systemd-logind[1303]: New session 20 of user core. 
Dec 13 14:22:35.060000 audit[5297]: USER_START pid=5297 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.061000 audit[5302]: CRED_ACQ pid=5302 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.181929 sshd[5297]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:35.181000 audit[5297]: USER_END pid=5297 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.182000 audit[5297]: CRED_DISP pid=5297 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:35.184366 systemd[1]: sshd@19-10.0.0.138:22-10.0.0.1:46766.service: Deactivated successfully. Dec 13 14:22:35.183000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.0.138:22-10.0.0.1:46766 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:35.185354 systemd-logind[1303]: Session 20 logged out. Waiting for processes to exit. Dec 13 14:22:35.185438 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 14:22:35.186577 systemd-logind[1303]: Removed session 20. 
Dec 13 14:22:39.731000 audit[5316]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:39.733138 kernel: kauditd_printk_skb: 57 callbacks suppressed Dec 13 14:22:39.733224 kernel: audit: type=1325 audit(1734099759.731:540): table=filter:125 family=2 entries=20 op=nft_register_rule pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:39.731000 audit[5316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd3edec00 a2=0 a3=1 items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:39.738752 kernel: audit: type=1300 audit(1734099759.731:540): arch=c00000b7 syscall=211 success=yes exit=2932 a0=3 a1=ffffd3edec00 a2=0 a3=1 items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:39.738807 kernel: audit: type=1327 audit(1734099759.731:540): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:39.731000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:39.741000 audit[5316]: NETFILTER_CFG table=nat:126 family=2 entries=106 op=nft_register_chain pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:39.741000 audit[5316]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffd3edec00 a2=0 a3=1 items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:39.748344 kernel: audit: type=1325 audit(1734099759.741:541): table=nat:126 family=2 entries=106 op=nft_register_chain pid=5316 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Dec 13 14:22:39.748389 kernel: audit: type=1300 audit(1734099759.741:541): arch=c00000b7 syscall=211 success=yes exit=49452 a0=3 a1=ffffd3edec00 a2=0 a3=1 items=0 ppid=2387 pid=5316 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:39.748409 kernel: audit: type=1327 audit(1734099759.741:541): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:39.741000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Dec 13 14:22:40.185086 systemd[1]: Started sshd@20-10.0.0.138:22-10.0.0.1:46772.service. Dec 13 14:22:40.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:46772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:40.188881 kernel: audit: type=1130 audit(1734099760.184:542): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:46772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:40.227000 audit[5318]: USER_ACCT pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.228721 sshd[5318]: Accepted publickey for core from 10.0.0.1 port 46772 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:40.230184 sshd[5318]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:40.228000 audit[5318]: CRED_ACQ pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.234775 kernel: audit: type=1101 audit(1734099760.227:543): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.234860 kernel: audit: type=1103 audit(1734099760.228:544): pid=5318 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.234888 kernel: audit: type=1006 audit(1734099760.229:545): pid=5318 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Dec 13 14:22:40.229000 audit[5318]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffed81b880 a2=3 a3=1 items=0 ppid=1 pid=5318 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:40.229000 
audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Dec 13 14:22:40.239387 systemd-logind[1303]: New session 21 of user core. Dec 13 14:22:40.240092 systemd[1]: Started session-21.scope. Dec 13 14:22:40.243000 audit[5318]: USER_START pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.244000 audit[5321]: CRED_ACQ pid=5321 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.385663 sshd[5318]: pam_unix(sshd:session): session closed for user core Dec 13 14:22:40.385000 audit[5318]: USER_END pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.385000 audit[5318]: CRED_DISP pid=5318 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:40.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.0.138:22-10.0.0.1:46772 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:40.388318 systemd-logind[1303]: Session 21 logged out. Waiting for processes to exit. Dec 13 14:22:40.388484 systemd[1]: sshd@20-10.0.0.138:22-10.0.0.1:46772.service: Deactivated successfully. 
Dec 13 14:22:40.389489 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 14:22:40.390091 systemd-logind[1303]: Removed session 21. Dec 13 14:22:44.573285 kubelet[2223]: E1213 14:22:44.573256 2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 14:22:45.388971 systemd[1]: Started sshd@21-10.0.0.138:22-10.0.0.1:39602.service. Dec 13 14:22:45.388000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 14:22:45.389874 kernel: kauditd_printk_skb: 7 callbacks suppressed Dec 13 14:22:45.389941 kernel: audit: type=1130 audit(1734099765.388:551): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 14:22:45.435000 audit[5332]: USER_ACCT pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:45.436937 sshd[5332]: Accepted publickey for core from 10.0.0.1 port 39602 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0 Dec 13 14:22:45.438460 sshd[5332]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 14:22:45.437000 audit[5332]: CRED_ACQ pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:45.442214 kernel: audit: type=1101 audit(1734099765.435:552): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:45.442272 kernel: audit: type=1103 audit(1734099765.437:553): pid=5332 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Dec 13 14:22:45.442295 kernel: audit: type=1006 audit(1734099765.437:554): pid=5332 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Dec 13 14:22:45.437000 audit[5332]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7eb8970 a2=3 a3=1 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 14:22:45.447289 
kernel: audit: type=1300 audit(1734099765.437:554): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe7eb8970 a2=3 a3=1 items=0 ppid=1 pid=5332 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:22:45.447351 kernel: audit: type=1327 audit(1734099765.437:554): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:22:45.437000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:22:45.446900 systemd-logind[1303]: New session 22 of user core.
Dec 13 14:22:45.447778 systemd[1]: Started session-22.scope.
Dec 13 14:22:45.451000 audit[5332]: USER_START pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.452000 audit[5335]: CRED_ACQ pid=5335 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.458478 kernel: audit: type=1105 audit(1734099765.451:555): pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.458532 kernel: audit: type=1103 audit(1734099765.452:556): pid=5335 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.613103 sshd[5332]: pam_unix(sshd:session): session closed for user core
Dec 13 14:22:45.613000 audit[5332]: USER_END pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.616473 systemd-logind[1303]: Session 22 logged out. Waiting for processes to exit.
Dec 13 14:22:45.616670 systemd[1]: sshd@21-10.0.0.138:22-10.0.0.1:39602.service: Deactivated successfully.
Dec 13 14:22:45.617619 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 14:22:45.614000 audit[5332]: CRED_DISP pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.618058 systemd-logind[1303]: Removed session 22.
Dec 13 14:22:45.620460 kernel: audit: type=1106 audit(1734099765.613:557): pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.620525 kernel: audit: type=1104 audit(1734099765.614:558): pid=5332 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:45.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.0.138:22-10.0.0.1:39602 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:22:47.573634 kubelet[2223]: E1213 14:22:47.573596    2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:22:50.573227 kubelet[2223]: E1213 14:22:50.573197    2223 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 14:22:50.616027 systemd[1]: Started sshd@22-10.0.0.138:22-10.0.0.1:39618.service.
Dec 13 14:22:50.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:39618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:22:50.616889 kernel: kauditd_printk_skb: 1 callbacks suppressed
Dec 13 14:22:50.616948 kernel: audit: type=1130 audit(1734099770.615:560): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:39618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Dec 13 14:22:50.657000 audit[5355]: USER_ACCT pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.659764 sshd[5355]: Accepted publickey for core from 10.0.0.1 port 39618 ssh2: RSA SHA256:/HJyHm5Z3TKV0xVrRefgtheJNUHxRnoHBht1EzpqsE0
Dec 13 14:22:50.661130 sshd[5355]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Dec 13 14:22:50.659000 audit[5355]: CRED_ACQ pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.665597 kernel: audit: type=1101 audit(1734099770.657:561): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.665671 kernel: audit: type=1103 audit(1734099770.659:562): pid=5355 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.667708 kernel: audit: type=1006 audit(1734099770.659:563): pid=5355 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1
Dec 13 14:22:50.659000 audit[5355]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd081d730 a2=3 a3=1 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:22:50.669899 systemd-logind[1303]: New session 23 of user core.
Dec 13 14:22:50.670289 systemd[1]: Started session-23.scope.
Dec 13 14:22:50.671235 kernel: audit: type=1300 audit(1734099770.659:563): arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd081d730 a2=3 a3=1 items=0 ppid=1 pid=5355 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null)
Dec 13 14:22:50.659000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D
Dec 13 14:22:50.672372 kernel: audit: type=1327 audit(1734099770.659:563): proctitle=737368643A20636F7265205B707269765D
Dec 13 14:22:50.672000 audit[5355]: USER_START pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.677000 audit[5358]: CRED_ACQ pid=5358 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.680958 kernel: audit: type=1105 audit(1734099770.672:564): pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.681004 kernel: audit: type=1103 audit(1734099770.677:565): pid=5358 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.790551 sshd[5355]: pam_unix(sshd:session): session closed for user core
Dec 13 14:22:50.790000 audit[5355]: USER_END pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.793343 systemd[1]: sshd@22-10.0.0.138:22-10.0.0.1:39618.service: Deactivated successfully.
Dec 13 14:22:50.794472 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 14:22:50.794478 systemd-logind[1303]: Session 23 logged out. Waiting for processes to exit.
Dec 13 14:22:50.790000 audit[5355]: CRED_DISP pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.795304 systemd-logind[1303]: Removed session 23.
Dec 13 14:22:50.797872 kernel: audit: type=1106 audit(1734099770.790:566): pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.797936 kernel: audit: type=1104 audit(1734099770.790:567): pid=5355 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success'
Dec 13 14:22:50.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.0.138:22-10.0.0.1:39618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
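The audit PROCTITLE records above carry the process command line as a hex-encoded byte string. A minimal sketch of decoding it (the `decode_proctitle` helper is hypothetical, not part of any audit tooling; the hex value is taken verbatim from the log, and the NUL-to-space substitution mirrors how multi-argument titles are stored):

```python
def decode_proctitle(hex_title: str) -> str:
    """Decode an audit PROCTITLE hex string into a readable command line."""
    raw = bytes.fromhex(hex_title)
    # The kernel records /proc/<pid>/cmdline, where argv entries are
    # NUL-separated; replace NULs with spaces for display.
    return raw.replace(b"\x00", b" ").decode("utf-8", errors="replace")

if __name__ == "__main__":
    # proctitle value from the audit records in this log
    print(decode_proctitle("737368643A20636F7265205B707269765D"))  # sshd: core [priv]
```

This matches what `ausearch -i` would render for the same record: the privileged sshd monitor process for user `core`.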